
wimvb

Advanced Members
  • Content count

    3,224
  • Joined

  • Last visited

Community Reputation

1,914 Excellent

6 Followers

About wimvb

  • Rank
    White Dwarf
  • Birthday 29/07/63

Contact Methods

  • Website URL
    http://wimvberlo.blogspot.se/

Profile Information

  • Gender
    Male
  • Location
    Sweden (59.47° North)
  1. Trying to kick some life into this thread. Here's my most recent capture (still a WIP): NGC 7331 & fuzzies
    Scope: Skywatcher 150PDS (of course) with Baader coma corrector
    Camera: ZWO ASI174MM-Cool
    Mount: Skywatcher AZ-EQ6
    Control and capture software: INDI Ekos/Kstars, no guiding
    Image stats:
    Lum: 140 x 20 s at -20 C, Gain = 300, collected 18 and 19 October
    RGB: ca. 90 x 20 s at -20 C, Gain = 300, collected 18 and 19 October
    About 50 darks, and flats. No bias frames, but dark flats.
    Very basic process in PixInsight (DBE, colour calibration, stretch and LRGB combination followed by UnsharpMask).
    The camera was in a different orientation the second night, so I had to crop the edges more than I wanted.
  2. First Proper Light QHY168C

    That's the good thing with data; you can always reprocess and learn new tricks. One trick I've learnt is to extract the luminance from an RGB image and process that more aggressively, then use LRGB combination to recombine it with the original, stretched RGB image. This works like a colour booster. I'm sure it's in the book somewhere. Good luck.
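    A rough numpy sketch of the idea, assuming float RGB data in [0, 1]; the channel-mean luminance and the gamma value are just stand-ins for whatever you'd actually do in PixInsight:

    ```python
    import numpy as np

    def lrgb_boost(rgb, gamma=0.6):
        """Crude 'colour booster': extract a luminance image, stretch it
        harder than the RGB data, then rescale the RGB pixels so they
        take on the boosted luminance (an LRGB-style recombination)."""
        lum = rgb.mean(axis=2)                   # simple luminance estimate
        boosted = np.power(lum, gamma)           # aggressive stretch of L only
        scale = boosted / np.maximum(lum, 1e-6)  # per-pixel luminance ratio
        return np.clip(rgb * scale[..., None], 0.0, 1.0)
    ```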
  3. I read about the possible loss of sensitivity in the QHY forum a while back. It was measured in a side-by-side comparison test, but it's not a major issue, I believe. Unfortunately I can't find the reference at the moment.
  4. Yes, as in scale down, not crop (cut). 50% downsampling basically averages each 2x2 block of four pixels into one new pixel. Your image will be half the size lengthwise, and a quarter the file size, but you also lose detail.
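    What the 2x2 averaging does, as a quick numpy sketch (assuming a mono image stored as a 2D array):

    ```python
    import numpy as np

    def downsample_2x2(img):
        """Average each 2x2 block into one pixel: half the width, half the
        height, a quarter of the pixel count (and roughly the file size)."""
        h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2  # drop odd edge rows/cols
        return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    ```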
  5. Another QHY camera, "only" 24 Mpixels. Same sensor as in the Nikon D600, according to QHY. Modern CMOS astro cameras are basically cooled DSLRs without automatic white balance.
  6. I believe that QHY will release mono versions of their larger-chip cameras, but they may have debayered OSC sensors to do so*. The astro market is most likely too small for Sony to be bothered. A disadvantage of these cameras (other than the minor inconvenience of having to invest in larger optics) is that you need lots of subs, especially under light-polluted skies; integration time for good results is still several hours. At 36 Mpixels x 2 bytes per pixel (even with a 14-bit ADC), each sub is about 72 MB, so you need a very powerful computer just to stack. If you do 4.5 hours in 1-minute subs, that's 270 subs, close to 20 GB in one night. (Even Yves' 145 subs that went into M31 is quite a large number. Integration time of 12 hours?) The time it takes to calibrate, stack and process these images adds up fast. Btw Yves, according to QHY the sensor in the QHY367C is the same as in the D810A.
    * When trying this, they lost the microlenses in the process, thus lowering the sensitivity. And got rid of minor diffraction effects as a bonus.
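    The back-of-envelope numbers, if you want to check them (sensor size and sub length as above):

    ```python
    mpixels = 36e6     # 36 Mpixel sensor
    bytes_per_px = 2   # 16-bit container, even with a 14-bit ADC
    sub_mb = mpixels * bytes_per_px / 1e6   # ~72 MB per sub
    subs = 4.5 * 60                         # 4.5 h of 1-minute subs = 270 subs
    print(f"{sub_mb:.0f} MB/sub, {subs * sub_mb / 1000:.1f} GB per night")
    ```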
  7. I've seen a few nice images coming from Chile. I believe ESO has an obsy there. And they do mono only, afaik. (But don't let Olly know, he may wish to relocate there.)
  8. Bought my first scope!

    Since you have a Barlow with your scope, you can use this method (link to PDF document): http://www.micosmos.com/enlaces/collimation_with_a_Barlowed_Laser.pdf I don't know how the result compares to other methods, but it's very easy, and your laser doesn't have to be collimated. Afaik, the primary mirror of your scope should have a center spot. To center the secondary, I use a simple collimation cap, which is the end cap of a 32 mm drain pipe with a small hole in it. In star tests, the image I get for inside & outside defocus is always symmetrical and looks the same.
  9. Under a dark sky, OSC works great. Visitors at your establishment, Olly, have proven this. But with light pollution, mono rocks. So, that takes us back to @ollypenrice's question: where was this image taken?
  10. That's very impressive for only 4.5 hours. (Actually, I find it impressive for any integration time.) Dark skies really help.
  11. My usual method is to save the project (excluding previews) while I am processing. This saves the complete history of the process, and is very powerful. Unfortunately, if you go between PixInsight and any other program, the history is lost. When I save the project, all images (and masks) are also saved, so there's no need to save images separately. But beware, projects can become HUGE. When I want to post an image I make a clone in PI, resample it, add a signature and then save it as a JPEG. Then I remove the clone before I save the project.
  12. XISF can be 32-bit floating point. This means that all pixel values are interpreted as a number between 0 (minimum) and 1 (maximum), irrespective of the bit depth of the camera. But this takes 4 bytes for every pixel value (assuming mono images). Afaik, TIFF is 16-bit integer, from 0 (minimum) to 65535 (maximum), and uses 2 bytes per pixel. So as long as images are not compressed, XISF files are twice the size of TIFF files. As long as you stay within PixInsight, XISF is the preferred format. To save disk space, you can save the image as 16-bit integer XISF. For unstacked images this makes sense, but for stacked images, I would use the larger format.
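    In numpy terms, the relation between the two formats looks roughly like this (the array values are just examples):

    ```python
    import numpy as np

    # 16-bit integer, as in a TIFF: 0 .. 65535, 2 bytes per pixel
    raw = np.array([[0, 32768, 65535]], dtype=np.uint16)

    # 32-bit float, as in XISF: 0.0 .. 1.0, 4 bytes per pixel
    norm = raw.astype(np.float32) / 65535.0

    # Round-trip back to 16-bit integer to halve the disk space
    back = np.round(norm * 65535.0).astype(np.uint16)

    print(norm.itemsize // raw.itemsize)  # 2: the float file is twice the size
    ```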
  13. M33

    Sorry, I couldn't see your signature when I responded to your post. Of course, your camera already has the filterwheel. Then it would make sense to have it sealed all the way, after you've installed the filters.
  14. Have a look here. Imo the best first intro to PixInsight. It shows a very basic workflow, presented without unnecessary distractions. Sure, there's stuff that could be refined, but that would make it more difficult to follow. Whatever you learn, try to not just copy someone else's settings; be critical of the outcome. Btw, try to avoid applying the STF as a permanent stretch. Using HT's midtones and blackpoint sliders gives you much better control, especially if you have a small target on a dark background, where the STF can be too aggressive. It's like using automatic exposure control on your camera when doing AP: you'll get an image, but it won't be pretty.
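    For reference, the curve behind the midtones slider is the midtones transfer function; here's a rough sketch of an HT-style manual stretch, with arbitrary example values for the blackpoint and midtones:

    ```python
    import numpy as np

    def mtf(x, m):
        """Midtones transfer function: maps 0 -> 0, 1 -> 1,
        and the midtones balance m -> 0.5."""
        return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

    def manual_stretch(img, blackpoint=0.01, midtones=0.25):
        """Clip at the blackpoint, rescale to [0, 1], then lift the midtones.
        The point is that you choose these values, unlike an auto STF."""
        x = np.clip((img - blackpoint) / (1.0 - blackpoint), 0.0, 1.0)
        return mtf(x, midtones)
    ```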