Everything posted by wimvb

  1. The crux here is "random in nature". What would the cause of this randomness be, other than seeing and grit in the gears? Random steps in X and random steps in Y will result in a pattern that fills a circle. The size of the steps determines the size of that circle. If the system is mechanical, it can't be tinkered with and optimized, simply because the deviations are random. In fact, any attempt to "fix" such a system by adding corrections will only result in a wider circle (increased RMS). Many years ago I was assigned the task of improving a process for film deposition. The aim of the process was to deposit a thin hard coating of 100 nm on a substrate. The process involved hands-on interaction by the technicians, and one "fix" they came up with was to change the deposition time (in AP, we would call that the exposure time) depending on previous results. They adopted a scheme where they would increase the deposition time if the previous run had resulted in a film that was thinner, and decrease the deposition time when the previous film was thicker than the target. By simple statistical analysis I showed them that the deposition process had a built-in randomness, and all they did was increase the variation in film thickness by trying to manipulate it. Had they used a constant deposition time, the variation in thickness would have been less. How does this translate into guiding practice? Quite simply, actually. If the guiding variations you see are not caused by the mechanics, but are truly random, then you shouldn't send any guide pulses. PHD tries to estimate the randomness in its "high frequency star motion" parameters and avoid chasing them through its MinMo and aggressiveness settings. Mechanical deviations (mount movements) are usually much slower than these high-frequency motions, and are seldom random in nature, and those are what we try to guide out. The PHD manuals don't say this, but if seeing varies from one session to another, you should run the GA every session (it takes only 2 minutes, and measuring backlash isn't needed). Afterwards, set MinMo to the recommended value and keep an eye on aggressiveness. Keep it low, and the exposure time long enough, if seeing is poor.
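As an aside, the statistics are easy to verify. Below is a minimal NumPy sketch (the names and numbers are mine, not from PHD) showing that "correcting" each step of a purely random process by the previously measured error only widens the spread:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
sigma = 1.0  # RMS of the purely random (seeing-like) deviation, arbitrary units

# Case 1: leave the random deviations alone
uncorrected = rng.normal(0.0, sigma, n)

# Case 2: "correct" each step by the full error measured on the previous one
# (chasing the seeing, or adjusting the deposition time after every run)
errors = rng.normal(0.0, sigma, n)
corrected = errors.copy()
corrected[1:] -= errors[:-1]  # subtract the previous, statistically unrelated, error

print("RMS without corrections:", round(uncorrected.std(), 2))  # ~ 1.0 (sigma)
print("RMS with corrections   :", round(corrected.std(), 2))    # ~ 1.41 (sigma * sqrt(2))
```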
  2. Have you seen this, Carole? https://noirlab.edu/public/products/fitsliberator/
  3. You need to supply your data if you want help. Either a link to the full dataset or to the unprocessed, stacked image. Supply basic information with the image (camera, telescope, filters used, single sub exposure time, total integration time).
  4. Have patience. ZWO will soon remedy that. (At least, they've always done so in the past.)
  5. This is an easy one, but you'll need a Bahtinov mask. Point to a bright star and focus the main camera with the Bahtinov mask. Then target any bright star with the guide camera and focus the OAG/guide cam with the Bahtinov mask. Don't touch the main focuser after focusing the main camera. A DSLR and a wide-bodied guide cam don't always go well together. You have to make sure that the OAG sits on the bottom side of the DSLR, or the cameras may clash.
  6. A cooled version of the 432 would be a very nice galaxy camera. 0.9”/pixel @ 2 m focal length. I like my cooled 174mm. 😄
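For anyone checking that 0.9"/pixel figure: it follows from the usual pixel-scale rule of thumb. A quick sketch (the 9 µm pixel size for the 432 sensor is my assumption here):

```python
def pixel_scale(pixel_um: float, focal_mm: float) -> float:
    """Image scale in arcsec/pixel: 206.265 * pixel size [um] / focal length [mm]."""
    return 206.265 * pixel_um / focal_mm

print(round(pixel_scale(9.0, 2000.0), 2))  # ~0.93"/pixel at 2 m focal length
```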
  7. Many RGB filters have an overlap in B and G that approximately coincides with the OIII emission line. They also have a gap between G and R that coincides with the emission spectra of Hg and Na lights. Good RGB filters can therefore reduce some of the traditional light pollution. To further reduce LP effects you can create synthetic Luminance masters from the R, G, and B masters. The advantage of RGB imaging over OSC is that you can get "cleaner" colours. But this is of course very much affected by one's processing skills.
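A minimal sketch of what a synthetic luminance can amount to (plain NumPy; the array names and the equal weights are placeholders, and the masters would normally be registered and background-matched first):

```python
import numpy as np

def synthetic_luminance(r: np.ndarray, g: np.ndarray, b: np.ndarray,
                        weights=(1 / 3, 1 / 3, 1 / 3)) -> np.ndarray:
    """Weighted sum of the R, G and B masters, rescaled back to [0, 1]."""
    wr, wg, wb = weights
    lum = wr * r + wg * g + wb * b
    return np.clip(lum / lum.max(), 0.0, 1.0)
```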
  8. Should be available by clicking on one of the four icons/buttons below the histogram. https://pixinsight.com/doc/tools/HistogramTransformation/HistogramTransformation.html
  9. Correct. Usually (in my images at least), the darkest pixel doesn't have a value equal to zero. Mind you, that is after cropping any stacking edges and after DBE (where I always check the "normalize" tick box in the corrections section). So, I bring the darkest pixel in my image close to 0. In HT there is an icon for this, on the row labeled "shadows". It's the first icon of three, next to the box that shows the clipping percentage. (Image from PixInsight docs.) I then set the midtones marker to 0.25 and apply the stretch. Now the darkest pixel will be (close to) 0. I set the black point (shadows) back to 0 and apply the stretch again. I repeat this several times. In the image above, you can see that there is a narrow gap between the left hand side of the lower histogram window, with the black point/shadows marker, and the point where the histogram starts to rise. It is this gap that is closed by pushing the black point marker to the right. This gap increases if you stretch with the black point marker at 0, as shown in the image (compare top and bottom histograms). If you leave this gap, you essentially decrease the dynamic range of the image, because the pixel values to the left of the foot of the histogram aren't being used. Btw, don't use the automatic black point adjustment icon when you stretch a colour image, because it will mess up background neutralization. That tool calculates the lowest pixel value for each colour channel and sets a black point for each channel individually.
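Numerically, moving the shadows/black point to the foot of the histogram amounts to a linear rescale of the data so that the darkest pixel ends up at (or just above) zero. A rough NumPy sketch, under that assumption:

```python
import numpy as np

def set_black_point(img: np.ndarray, margin: float = 0.0) -> np.ndarray:
    """Move the darkest pixel (minus an optional safety margin) to 0 and rescale
    to [0, 1], closing the gap at the left of the histogram without clipping."""
    black = img.min() - margin
    return np.clip((img - black) / (img.max() - black), 0.0, 1.0)
```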
  10. First off: "levels" in PS is "histogram transformation (HT)" in PI. "Curves" in PS is "curves transformation (CT)" in PI. This is how I usually stretch (and I don't use/need noise reduction). 1. Using HT, I manually bring the black point marker in to just below clipping even one single pixel. I set the middle marker to 0.25. I then apply this gentle stretch to the linear image, and repeat until the histogram peak is at about 0.1 (which is equivalent to 25 - 26 in PS speak, because PS uses a range from 0 - 255; 0.1 x 255 = 25.5). 2. I then switch to CT. I put a marker on the straight curve underneath where the peak of the histogram is (should be about 0.1), and pull it down to about 0.08 (which is about 21 in PS). I put a second marker on the curve just to the right of the histogram, and pull this up. All the time I have the preview activated. The two markers will give me a basic S-curve. I then add a marker further to the right to decrease the brightest areas and avoid star bloat. Usually the curve ends up straight in the upper part, from about 0.7 to 1.0 on the X-scale. Point to note here: the S-curve starts to rise after the first marker. I make sure that this rise isn't too steep where the histogram is still high (this part of the image is noise dominated). I may add a marker to the left of the "first" one and lift it slightly to make even the lower part of the S-curve straighter. 3. If during this procedure bright areas start to lose local contrast, I will apply HDRMT with a lightness mask. But, and this is important, the lightness mask will have its white point dialled down to about 50% (linear curve with the white point brought down to 0.5). HDRMT is applied at about 50% strength. I adjust the number of layers (scale?) to get the detail I want. This is how I stretch luminance and mono images. For colour (RGB) images I start with either arcsinh transformation or masked stretch, because these retain colour in the stars. But I always follow up with curves transformation. And in both arcsinh and masked stretch, I never keep the default black point (clipping point). I either reset it (0) or take it down (to 1/10 of the recommended value in MS, and 0.5 of the recommended value in arcsinh). Doing this keeps the left hand side of the histogram well away from the left hand border. https://www.astrobin.com/7n0qcu/B/
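For those who like to see the arithmetic behind step 1, here is a rough sketch using the standard midtones transfer function; the 0.25 midtones value and the 0.1 background target are from the text above, while the loop itself is only my illustration:

```python
import numpy as np

def mtf(x: np.ndarray, m: float) -> np.ndarray:
    """Midtones transfer function: maps pixel value m to 0.5, keeps 0 and 1 fixed."""
    return (m - 1.0) * x / ((2.0 * m - 1.0) * x - m)

def iterate_stretch(img: np.ndarray, midtones: float = 0.25,
                    target_bg: float = 0.10, max_iter: int = 10) -> np.ndarray:
    """Repeat: bring the black point in to just below the darkest pixel,
    then apply a gentle midtones stretch, until the background reaches the target."""
    img = img.copy()
    for _ in range(max_iter):
        if np.median(img) >= target_bg:
            break
        black = img.min()                    # just shy of clipping a single pixel
        img = (img - black) / (1.0 - black)  # shadows adjustment
        img = mtf(img, midtones)             # gentle non-linear stretch
    return img
```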
  11. Not necessarily. You can create two stacks, one with all the data and one with only low-FWHM data. Combine the nebulosity and background from the large stack with stars from the low-FWHM stack. In PixelMath: new_image = largestack_starless + smallstack - smallstack_starless. You probably won't need a mask here.
  12. That’s most likely the cause of the uneven background. https://www.astrobin.com/221479/?q=Sh2-112 Only you can decide what to keep and enhance, and what to let go. While more data will always allow you to incorporate more of the background, that is not always a practical option. Especially if you are fighting light pollution.
  13. plus pixelmath. 😉
  14. What you do need to take into account is whether you use different filters on nights with different seeing. E.g., if you were to collect, say, Red on a night with poor FWHM, and Blue on a night with good FWHM, you could end up with red star bloat. That's one reason for me to collect data from all (RGB) filters during a night. I know you have (had) problems with your filter wheel, so that may not be an option for you now.
  15. I wouldn't care too much about the (FWHM) numbers. Nebulae don't need the same level of detail as galaxies. Just gather data while the gathering is good, and fix the stars in post-processing. CS
  16. Ultrafaint dwarf galaxies are seldom photogenic, but they have a fascinating physics history. These galaxies can form in the wake of galaxy mergers or galaxy collisions. And because they are so sparse and faint, they require patience and persistence from the observer.
  17. The galaxy is so faint and difficult to see because it is resolved into individual stars. It's like identifying certain trees in a forest. You actually need a moderate focal length to be able to see the galaxy, or you won't see it for all the stars. The third (overstretched) image shows the dwarf best. Hunting ultra-faint dwarf galaxies is all about stretching your gear's performance and your abilities to the limit (no pun intended).
  18. Very nice! Only 260 ly distant would be well within the Milky Way; it's 260 kly according to Wikipedia.
  19. I too am at Bortle 5, but without any street lights. Even with a cooled mono CMOS, I struggled to lift the dust from the background at 3.5 hrs integration time. IMO, you need to at least double or triple that time to get anywhere. Try to shield your gear from any interfering light sources. (Set up in the shadow of any street lamps if that's possible.) As long as the banding is more pronounced than the contrast in the dust, you won't be able to pull that out. Optimise your exposure time. For uncooled DSLRs, the general recommendation is to put the peak of the histogram in the camera display at 1/4 of the horizontal scale. As @Clarkey noted, this time of year, with warmer nights and less darkness, is not a good time to image such faint objects. Revisit the nebula when the nights get longer and cooler.
  20. A very nice setup, and a very nice image. It's hard to see on my tablet, but I guess it's "ordinary" Canon banding, i.e. the read pattern of the sensor. PixInsight has a script to remove it. With a relatively slow scope and a light pollution filter, you don't have that much light reaching the sensor, and you may need to increase the exposure time in order to drown the read pattern of your camera. 3.5 hours isn't much and you definitely need more data to pull out the dust. After all, it's dark dust on a dark background. As long as the contrast in the dust is competing with the contrast in the banding, you're fighting an uphill battle.
  21. This was partly a reaction to a reply to the original post. The weak point with the RPi, in my experience, is either the power supply not being up to the job or problems with USB cables. A few years ago, drivers were a possible cause, but these have become a lot better. For me, my EQ mount and Pegasus focus cube sometimes interfere with each other. Both have similar TTL-to-serial chips that seem to confuse the RPi's USB port handler.
  22. I subscribe to the undulatory theory for light propagation, and leave the corpuscular theory for emission and absorption. Huygens over Newton. As for the RPi, it is quite adequate for astrophotography automation. In fact, the youngest generation of SBCs are quad- or hexa-core; far more powerful than many a laptop used in the field. I have Ekos/KStars on an RPi4 which controls mount, focuser, cameras and filter wheel. It guides, focuses, flips, and plate solves without any hiccup, while I sit comfortably indoors.
  23. I started with a roll-away tool shed (see the link in Göran's reply), and this worked well. A small 3x3 or 3x4 ft kit works well and involves a minimum of DIY skills. It's also less expensive than a full-blown RoR observatory. A roll-away shed has many of the advantages of a RoR, but at a lower cost. If you use it with a tripod rather than a pier, you will need to check polar alignment, and may need to recalibrate the guiding software more often, but this isn't much of an issue. The reason I went for a RoR was that I wanted something more permanent and the ability to house two scopes, each on its own pier.
  24. When it comes to stretching, the two programs are not that different. PI's Histogram Transformation is Levels in PS. Curves is the same in both. Never ever use the screen transfer function in PI as part of your permanent stretch. Like Göran, I stretch in small iterations. I start with Histogram Transformation, where I place the black point just shy of clipping even one pixel. I set the mid point to 0.25. I apply this stretch as many times as needed, usually until the background is at 0.1. Then I move to Curves, where I put a marker under the histogram peak (now at 0.1) and pull it down to 0.075 - 0.08. A second marker at 0.25 - 0.35 is pulled up. This creates the basic S-curve, but will overstretch the bright areas. A third marker at 0.8 - 0.85 ensures that the bright parts won't be overstretched, and a fourth marker between markers 2 and 3 creates a smooth S-curve, where the upper part is almost linear. A curve like this stretches the dark areas more than the bright areas while keeping the noise down and the stars under control. On a starless image you can be more aggressive, of course.
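To illustrate the four-marker S-curve described above, here is a rough sketch using a monotone cubic through the control points (PI's Curves tool may interpolate differently; the marker positions are only the approximate values mentioned in the post):

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

# (input, output) control points: background (~0.1) pulled down, midtones lifted,
# bright end kept close to the diagonal so stars don't bloat
ctrl_x = [0.0, 0.10, 0.30, 0.55, 0.82, 1.0]
ctrl_y = [0.0, 0.08, 0.42, 0.66, 0.83, 1.0]

s_curve = PchipInterpolator(ctrl_x, ctrl_y)  # monotone, no overshoot

x = np.linspace(0.0, 1.0, 11)
print(np.round(s_curve(x), 3))
```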
  25. Rodd, that image is from an amateur hosting facility, which is probably located at a dark site. And even though xlong complains about light pollution, that may not even come close to what you have. Total integration time more than doubles for every magnitude you lose in darkness. If your sky has a brightness of, say, magnitude 17, and xlong has 19, you will need about 6 times as much integration time. I agree with Göran; from what I've seen from you lately, you're definitely not wasting your time. So, please keep those pretty pictures coming. Btw, Göran has 21.5 mag sky darkness. That, in combination with a double f/2 RASA and sensitive ZWO cameras, is almost cheating 😉😉 Very nice rework btw, @gorann.
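The 6x figure comes straight from the magnitude-to-flux relation, assuming background-limited imaging where the required integration time scales roughly with the sky flux. A quick sketch:

```python
def integration_factor(your_sky_mag: float, darker_sky_mag: float) -> float:
    """Factor by which total integration time grows at the brighter site:
    one magnitude of sky brightness corresponds to ~2.512x more sky flux."""
    return 10.0 ** (0.4 * (darker_sky_mag - your_sky_mag))

print(round(integration_factor(17.0, 19.0), 1))  # ~6.3x for a 2-magnitude difference
```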