wimvb

Members
  • Posts: 8,813
  • Joined
  • Last visited
  • Days Won: 5

Everything posted by wimvb

  1. Great first image. Just keep doing what you are doing, and things will get easier after a while. Once you have the data, you can always reprocess as you learn the software. To get focus right, I recommend a Bahtinov mask. It's cheap, easy to use, and speeds up this part a lot. For polar alignment you can use the routines in the HEQ5 Pro hand controller (SynScan). Roughly polar align your mount (level it, set the altitude to your latitude, and use a compass to point the azimuth north). Then do a 2-star alignment; use a Barlow and a short focal length eyepiece for accuracy. Then run the SynScan polar alignment routine (just watch carefully in which direction the stars move when the mount wanders off). Then repeat the 2-star alignment. Repeat until satisfied. I find this routine gets me close enough for photography. Good luck
  2. +1 on the need to use raw files. There are several good programs to view and process raw files. All astro imaging software can handle raw images directly, and if you wish to process single frames or your daytime images, you can use RawTherapee (free) or something similar.
  3. In this case it's not the total exposure time (integration time or time on target) that is important, but the single sub exposure time. If you want to catch weak detail, you need to let the system collect photons. A weaker target means fewer photons per unit of time. The total integration time can only help to decrease noise (to a certain limit); it will never increase the signal. Assuming that you use a DSLR, with a target like this you would use a low ISO setting, which gives you more dynamic range, combined with longer sub exposures. Sub exposure time is limited by either light pollution or the stars becoming overexposed. To get the noise down, you need lots of subs. In processing, you need a tool to remove the light pollution. My favourite is PixInsight DBE, but Photoshop GradientXterminator probably works as well. Good luck,
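To make the "more subs lowers noise but never adds signal" point concrete, here is a minimal sketch (the function name and numbers are illustrative, not from the post): averaging N equal subs leaves the signal unchanged while random noise drops as 1/sqrt(N), so SNR grows as sqrt(N).

```python
import math

def stack_snr(signal_per_sub, noise_per_sub, n_subs):
    """SNR of an average-stack of n_subs equal subframes.

    Averaging leaves the signal unchanged while random noise
    drops as 1/sqrt(n_subs), so SNR grows as sqrt(n_subs).
    """
    return (signal_per_sub / noise_per_sub) * math.sqrt(n_subs)

# One sub with SNR 10; sixteen subs only quadruple the SNR:
print(stack_snr(100, 10, 1))   # 10.0
print(stack_snr(100, 10, 16))  # 40.0
```

Note the diminishing returns: going from 16 to 64 subs costs four times the imaging time for only another factor of two in SNR.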
  4. In response to your needs, Gina, Pleiades Astrophoto have released an upgrade for PixInsight. It now makes better use of multi-core processors, speeding up the stacking process. But I found out that it also sucks all the power out of any other software that is running. So you need a dedicated computer for PI. No more surfing or playing minesweeper while PI runs BPP. Happy new year,
  5. Four actually: two on the primary mirror, and two on the secondary (strictly, you don't have to touch the third). As for the laser collimator: it has its benefits if combined with a Barlow, provided there is a centre mark on the primary. http://garyseronik.com/a-beginners-guide-to-collimation/ http://garyseronik.com/collimation-tools-what-you-need-what-you-dont/#more-165 (check further down, at option #5; also the link to Nils Olof Carlin's article)
  6. As far as I know, when you cut a cone in a slanting direction, you get an ellipse (= conic section) the size and shape of the secondary. Like you, I don't understand the diagram. And since the diagram should show the view through the focuser, 1 and 2 can't be the focuser. As for centering or not centering the secondary, I follow this text for collimation: http://garyseronik.com/a-beginners-guide-to-collimation/
  7. Why not? This seems counterintuitive.
  8. That's a "spikey" M45. And I love the detail you got in Melotte 15.
  9. First light for my guiding solution, taken last night. Changes since my previous entry in this thread: - AZ EQ6 replaces EQ3 Pro - added SW ST80 with ASI120MM for guiding Guiding software: lin_guider on a Raspberry Pi, under Ubuntu MATE. What should have been 15 x 15-minute subframes was cut in half by ice forming on the guidescope. The dew heater (held in place by the tape you can see in the image) is clearly not up to the job. Cheers,
  10. Glad to be of help. I hope you get your boiler fixed soon. Cheers,
  11. Read pattern "noise" is handled by bias, but hot or warm pixels are not, since they need time to accumulate. My understanding is that the read pattern in bias frames is most likely very close in signal strength to the random electronic noise. Therefore, using a lot of bias frames will bring the noise level down and reveal the read pattern. In dark frames, the read pattern is going to be calibrated out by the master bias. But a variation in sensitivity between pixels can also show up as a fixed pattern, and that won't be calibrated/stacked out; it is similar to truly hot pixels. If this is the case, no number of dark frames will correct it. I agree with you about the number of calibration frames. I also don't take dark flat frames (or should that be flat dark frames?). What has me puzzled here is the line Pixel combination .......... disabled In the image integration dialog, combination is either average, median, maximum or minimum. I've never seen this line, and I can't even reproduce it
  12. Why would you need so many darks? Each light frame is calibrated with a master dark. The master dark is the integration of all dark frames. The noise in the master dark adds to the noise in each light frame during calibration, according to noise_cal^2 = noise_uncal^2 + noise_md^2, where noise_cal is the noise in the calibrated light frame, noise_uncal is the noise in the uncalibrated light frame, and noise_md is the noise of the master dark. As long as the noise in the master dark is substantially less than that in the uncalibrated light frame, calibration shouldn't degrade the quality much. IMO, since the master dark's noise scales down as 1/SQRT(number of dark frames), it should be substantially lower than the single light frame noise even with relatively few dark frames. 500 dark frames for a master dark seems excessive. BTW, the above argument is for random shot noise, not for fixed pattern "noise", which can not be integrated out in a master dark, no matter how many dark frames are used in creating it. Or am I missing something (again)??
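The quadrature argument above can be sketched in a few lines (a minimal illustration; the function name and the example noise values are made up, not measurements): the master dark's noise is the single-dark noise divided by sqrt(n), and it adds in quadrature to the light frame's own noise.

```python
import math

def calibrated_noise(noise_light, noise_single_dark, n_darks):
    """Noise in a dark-calibrated light frame.

    The master dark's noise (single-dark noise / sqrt(n_darks))
    adds in quadrature to the light frame's own noise:
    noise_cal^2 = noise_uncal^2 + noise_md^2.
    """
    noise_md = noise_single_dark / math.sqrt(n_darks)
    return math.sqrt(noise_light**2 + noise_md**2)

# With 25 darks, the master dark barely adds anything; 500 darks
# buy almost no further improvement:
print(calibrated_noise(10.0, 5.0, 25))   # ~10.05
print(calibrated_noise(10.0, 5.0, 500))  # ~10.00
```

This is the "500 darks seems excessive" point in numbers: once the master-dark noise is a few times smaller than the light-frame noise, more darks change essentially nothing.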
  13. Nice build. Be careful though, even if it looks like illumination is spread evenly across the surface, it may very well vary. It would be a shame if you find out the hard way that your flats aren't flat. If you can measure the light intensity, you should try to verify that it really is flat. A simple photoresistor stuck in a small tube and attached to an ohmmeter may be all it takes, since you'd be only interested in variations, not absolute values. Verifying by taking an image may not work if the optical system suffers from vignetting. Just an idea.
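Since only relative variations matter for checking the panel, the photoresistor idea above needs no calibration at all. A minimal sketch of the bookkeeping (function name and sample readings are hypothetical): take readings at several spots and compare the peak-to-peak spread to the mean.

```python
def relative_variation(readings):
    """Peak-to-peak spread of panel readings, relative to the mean.

    Units don't matter (ohms, lux, ADU) because only relative
    differences across the panel are of interest.
    """
    mean = sum(readings) / len(readings)
    return (max(readings) - min(readings)) / mean

# Hypothetical ohmmeter readings sampled across the panel:
samples = [101.2, 99.8, 100.5, 98.9, 100.1]
print(f"{relative_variation(samples):.1%}")  # ~2.3%
</```

One caveat: a photoresistor's response is not linear in light intensity, so this only gives a rough indication of uniformity, which is usually all you need to decide whether the panel is good enough for flats.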
  14. That looks better, just a hint left in the right side. You can always have a look at the raw images. Maybe it shows up there as well. Anyway, your image is another proof that the EQ3, with some TLC, can deliver. The stars are nice and round, and plenty of them. You also have good detail in the nebula. I like your image.
  15. Nice! But there is a line running horizontally across the image. Do you know the cause of this?
  16. Or you could just calculate that in when you collimate. That should work if you have a laser collimator.
  17. If there are no optical parts involved, you could try teflon or silicone spray. Clean afterwards. I wouldn't use it with optics (other than an old filter as per my previous post). Just a thought
  18. I guess you may have solved this problem by now, but here's a method I use to unscrew filter adapter rings. I screw a filter into the ring I want to unscrew. This enables me to get a firmer grip without bending the rings. It's then much easier to unscrew the two thin rings from each other. Hope this can be of some use
  19. Is it a full moon again? I hadn't even noticed with all the clouds we're having. Nice to see that you got some imaging time
  20. Here's one of my first ever astro images, M81 and M82 At one time I managed to get only slight star trailing at 300 secs unguided on my EQ3 Pro, with aluminium tripod. Later I learned that this is supposed to be impossible, so I couldn't repeat it. Looking at this image now, I realize I should re-process the data. 7 x 300 seconds at ISO 800 unguided SkyWatcher 150PDS on EQ3 Pro Camera: Pentax K20D not astromodified
  21. Still, a very nice image. I think it suffers from vignetting, which can be remedied by some DBE. Sorry to hear about the usb port. Hope you get it worked out.
  22. Same here, it started out nice, but then clouds moved in. It's typical for this time of the year: warm sunny days with evaporation that will turn into clouds when the air cools down in the evening. Didn't even bother to haul my gear out.
  23. As for minimum temperature, there may well be a practical limit. Electronic components have a temperature dependence, and may very well go out of spec at very low temperatures. So even if the cooling can go lower, the electronics may set a limit. That limit can be -30 °C. There is no contradiction: the camera cools to 40 °C below ambient, down to a minimum of -30 °C.
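The "40 below ambient, floored at -30" rule resolves to a simple max(). A sketch with assumed numbers (check your own camera's spec sheet; the delta and floor here are just the values from the post):

```python
def sensor_temperature(ambient_c, max_delta_c=40.0, floor_c=-30.0):
    """Achievable sensor temperature for a cooled camera.

    The cooler can pull at most max_delta_c below ambient, but the
    electronics impose an absolute floor (assumed values from the
    discussion, not a datasheet).
    """
    return max(ambient_c - max_delta_c, floor_c)

print(sensor_temperature(20.0))  # -20.0: limited by the cooling delta
print(sensor_temperature(-5.0))  # -30.0: limited by the floor, not -45
```

So on a warm night the delta is the limit, and on a cold night the electronics floor takes over, which is exactly why both numbers can appear in the same spec without contradiction.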
  24. Haven't read all of your posts in detail. But it struck me that if you want to measure camera characteristics, PixInsight has scripts for that. (Of course.) What you are after might just be somewhere among all the tools...
  25. Gina, when using different exposure times, do you adjust gain as well? My understanding is that lower gain = more dynamic range = deeper images. At unity gain, the full well is limited by the 12-bit depth, i.e. 4095. On the other hand, how would one utilise more than 12 bits when the output is limited to 12 bits? I'm confused about this.
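The arithmetic behind the 4095 figure can be sketched like this (an illustration of the general relation, not of any specific camera; the function name is made up): with gain in e-/ADU, the ADC clips at (2^bits - 1) × gain electrons, so at unity gain a 12-bit converter saturates at 4095 e- regardless of how deep the pixel's full well actually is.

```python
def adc_clip_electrons(bit_depth, gain_e_per_adu):
    """Electron count at which the ADC output clips.

    At unity gain (1 e-/ADU) a 12-bit ADC saturates at 4095 e-,
    even if the sensor's physical full well is deeper.
    """
    return (2**bit_depth - 1) * gain_e_per_adu

print(adc_clip_electrons(12, 1.0))  # 4095.0 e- at unity gain
print(adc_clip_electrons(12, 4.0))  # 16380.0 e-: lower gain reaches deeper into the well
```

This is why lowering the gain recovers dynamic range: more electrons fit into each ADU step before the converter clips, at the cost of coarser quantisation of the faint end.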