
Everything posted by gorann

  1. Great, that was what I was thinking - aligning the channels could fix it. I even think it improved the overall impression of the sky. However, something may now have happened to the red in the galaxy, but maybe it is just my screen.
  2. Yes, certainly stunning! I am a bit curious about the apparent blue/red misalignment or chromatic aberration that I see when I look closer at the star field. Do you know what may cause it - the reducer? It does not spoil the overall impression, and I am sorry for pixel peeping...
  3. Just for the record, there is one DSLR that would give you a good pixel scale: the Sony A7S. It is mirrorless (so rather a DSL than a DSLR) with a full-frame (24x36 mm) 12 Mpix sensor with 8.32 µm pixels. Your C9.25 with the 0.63x reducer would give a pixel scale of 1.16"/pixel. When it came out it was considered a sensation in low-light sensitivity due to its large pixels and low noise. The Mk1 version is now about 5 years old but still quite expensive second hand - I just bought one in mint condition (985 exposures) for 900 Euro on eBay to use on my SCTs. One drawback is a relatively low Ha sensitivity due to a filter in front of the chip, so right now mine is with JTW Astronomy in Amsterdam to be modded for full Ha sensitivity. I do have cooled CCD and CMOS cameras, so this is kind of an experiment. Regarding less expensive DSLRs, I have very good experience with the Canon 60D - when it came out (maybe 10 years ago) it was noted for very low noise, and Canon even made an astro version of it (60Da) with full Ha sensitivity. Its pixel size is 4.31 µm, so you will be oversampling with it in your system, but it will certainly work.
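For reference, those scale figures follow from the standard formula: scale ("/pix) = 206.265 x pixel size (µm) / focal length (mm). A quick sketch, where the 2350 mm C9.25 focal length and the 0.63x reducer factor are my assumptions (Celestron's standard reducer/corrector is f/6.3):

```python
def pixel_scale(pixel_um: float, focal_mm: float) -> float:
    """Image scale in arcsec/pixel: 206.265 * pixel size (um) / focal length (mm)."""
    return 206.265 * pixel_um / focal_mm

c925_reduced = 2350 * 0.63                 # ~1480 mm with the 0.63x reducer (assumed)
print(pixel_scale(8.32, c925_reduced))     # Sony A7S:  ~1.16 "/pix
print(pixel_scale(4.31, c925_reduced))     # Canon 60D: ~0.60 "/pix (oversampled)
```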
  4. Yes, you are right Vlad, 1"/pix would only apply on a few rare very good nights at my site, with SQM 21.0 - 21.6 and variable seeing. Usually my guiding (which varies from 0.4" to 1" depending on seeing) will tell me.
  5. I have been reading/skimming through this now rather long thread and I may be wrong, but I think a conclusion could be:
     - Use a focal reducer to get your whole object into the field of view.
     - Use a focal reducer to get a better match between resolution and pixel size (i.e. close to 1"/pix for most of us) and thereby gain in S/N ratio.
     - Do not use a focal reducer if you already have your whole object in the field of view and you are around 1"/pix with scope and camera as they are.
     Right?
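For fun, here is that rule of thumb as a tiny sketch - the helper and its 0.7 threshold are purely my illustration, not an established tool:

```python
def use_reducer(object_fits_fov: bool, scale_arcsec_per_pix: float) -> bool:
    """Rule of thumb from the list above: reduce if the object does not fit
    the field of view, or if you are clearly oversampled (scale well below
    ~1 arcsec/pixel for typical seeing). The 0.7 cutoff is arbitrary."""
    return (not object_fits_fov) or (scale_arcsec_per_pix < 0.7)

print(use_reducer(True, 1.16))   # False: fits and well sampled, leave it alone
print(use_reducer(True, 0.60))   # True: heavily oversampled, consider reducing
```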
  6. I gave it a quick and, some may say, dirty treatment with curves, layers and the brush tool in PS. First I stretched the part of the image containing the outer shell, then I used the brush tool to apply the stretch only to the nebula (and the little galaxy). After that I also used a curve to bring the brightest parts of the whole image down (including the core of the nebula). All this would probably work better on the original image than on the 8-bit JPG that I downloaded.
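For anyone working outside PS, a rough numpy analogue of that selective stretch - the gamma values and the smooth 0..1 mask (standing in for the brush) are purely illustrative:

```python
import numpy as np

def selective_stretch(img, mask, gamma_faint=0.4, gamma_global=1.2):
    """Stretch hard (gamma < 1) to lift the faint shell, blend it in only
    where mask ~ 1 (the numpy analogue of the PS brush), then apply a
    gentle gamma > 1 to bring the brightest parts back down.
    img and mask are assumed to be float arrays normalised to 0..1."""
    lifted = img ** gamma_faint
    blended = mask * lifted + (1 - mask) * img
    return blended ** gamma_global
```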
  7. Not sure what cameras you have in stock, Rodd, but I found I can solve the microlensing problem I sometimes have with my ASI1600 on bright stars by patching the affected star with a bit of ASI071 (OSC) data. For bright stars you just need a few minutes of data to do this.
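The patch itself is simple in principle; a minimal sketch of the idea, assuming both images are already registered and at the same pixel scale (the helper and its soft-edge factor are my invention):

```python
import numpy as np

def patch_star(img, donor, center, radius):
    """Blend the registered donor image into img inside a soft circular
    mask around the star, fading from 1 at the star to 0 at 1.3*radius
    so the seam is hidden. Both arrays share shape and orientation."""
    yy, xx = np.indices(img.shape[:2])
    r = np.hypot(yy - center[0], xx - center[1])
    mask = np.clip((1.3 * radius - r) / (0.3 * radius), 0.0, 1.0)
    if img.ndim == 3:                # colour data: broadcast over channels
        mask = mask[..., None]
    return img * (1.0 - mask) + donor * mask
```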
  8. Nice! A new one to me too, at least as an object in its own right. Regarding star processing, I am not sure it would help to process them separately - you still have to put them back in at the end and need to shrink them before that (but maybe - I have never tried it). If you use PI you could try this method by Adam Block (again, I have not got around to trying it myself yet): https://pixinsight.com/forum/index.php?topic=13567.0 By the way, what caused that vertical line to the right? It looks too broad to be a satellite.
  9. I really like the effort you put into helping people get into this great hobby! Just two comments. A scientific one is that the need for red-light sensitivity in our cameras has nothing to do with the red shift you show in your cartoon. Red shift is something affecting far, far away galaxies, but the reason we need cameras sensitive in red is that the nebulae in our Milky Way often emit much of their light from glowing hydrogen atoms at the red H-alpha wavelength of 656 nm. Secondly, I think you may be slightly wrong in suggesting that CMOS is noisier than CCD - I thought it is often quite the opposite, but others may have more to say about that.
  10. I have made the same mistake a few times. So I now try to make it a rule to spend just a few minutes on short exposures if I aim at bright objects like this one. It takes no time at all and is a great help against the frustration of looking at blown-out cores.
  11. Thanks Dave! Got to spend a few more nights on it to even get close to your level on the Veils - not sure I have the stamina😉
  12. I used two scopes. For the Ha/Oiii data (which shows the nebula - there is hardly a trace of it in RGB) I used the Esprit 150 with the ASI1600MM, so 0.75"/pix. For the RGB I used the Esprit 100 with the ASI071 (OSC), so 1.79"/pix. Before merging them I scaled the RGB image up to 0.75"/pix (which made a huge file). I then cropped down in two stages as seen. I also downsampled the jpg images here to about 70% of the original. I had another look at my 10-30s core data and found more colour and structure in there, so here is an updated version with more of a cat's eye in the center.
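The rescaling step is just a scale-factor resample; a small sketch with scipy, where the factor is the ratio of the two pixel scales (the array here is a stand-in for the real RGB stack):

```python
import numpy as np
from scipy.ndimage import zoom

SCALE_NB, SCALE_RGB = 0.75, 1.79      # arcsec/pixel of the two rigs
factor = SCALE_RGB / SCALE_NB         # ~2.39x upsampling of the RGB

rgb = np.random.rand(500, 500, 3)     # stand-in for the real RGB image
rgb_matched = zoom(rgb, (factor, factor, 1), order=3)   # cubic resample
print(rgb_matched.shape)              # ~ (1193, 1193, 3) - hence the huge file
```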
  13. Thanks Alan! The 1050 mm FL of the Esprit 150 is relatively OK for this one, although I am tempted to use my Meade 14" next time - but then I need a really steady sky.
  14. Thanks Carole! I noticed the burnt-out core in the first 3-minute subs, so I took some 10s and 30s exposures, maybe 8 minutes in total, and that saved it.
  15. Surprisingly it stayed clear all night, so I got almost 14 hours from my dual Esprit rig on this enigmatic planetary nebula. There was a full moon of course, but I still took the chance of collecting some RGB with the Esprit 100 and ASI071. I expected that at best I could use it for star colour, but the data turned out to be virtually without gradients. It probably helped that the scope was pointing away from the moon. I also decided to go for rather short subs, so I ended up waiting for my computer to stack 263 x 90s of RGB data. On the Esprit 150 the ASI1600MM was collecting 2 hours of Ha (Baader 3.5 nm) and 5 hours of Oiii (Baader 8.5 nm), yielding another 200 or so subs. I also shot some 10s and 30s subs to save the very bright core of this nebula - the actual eye. Shooting the sky with the Esprit 100 gave a quite wide field with a rather small nebula, so I post the images from wide to narrow here. Suggestions and comments most welcome! Cheers Göran
  16. Maybe, and I forgot to say it is a really excellent image Rodd!
  17. Could that really be a galaxy? I assume it would have to be very bright to shine through that dust. There is a fuzzy thing in the same spot in this APOD image: https://apod.nasa.gov/apod/ap190911.html
  18. I agree with everyone - really beautiful images! They set a very high standard👍. I was imaging with virtually the same set up last night (Esprits 150 and 100 on Mesu200) and fortunately I was aiming at a different part of the sky. Otherwise the pressure now would have been too much😱 although I could always blame seeing.....
  19. Aha, I get it. I assume that what you can do is identify the first image after a flip (it takes some time to flip and re-align, so it should be pretty obvious from the time stamp on the file) and then have a quick look to see whether it has been rotated. If you cannot see a star pattern, I assume your processing program has a simple function that lets you look at a temporary stretch (like the Equalize function in PS). If that image is rotated, then the same presumably goes for the rest of the images from that night, so you have to rotate them all. After that you should be able to use the same flats and darks for all images (unless you also have some of those rotated). Good luck!
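If there are many subs, that first post-flip frame is easy to find programmatically; a small sketch with astropy, assuming the usual DATE-OBS header key and a "lights" folder (check what your capture software actually writes):

```python
from pathlib import Path
from astropy.io import fits
from astropy.time import Time

# Sort the subs by their DATE-OBS header and find the one unusually long
# gap - that is the meridian flip, and the frame right after it is the
# first one to inspect for rotation.
files = sorted(Path("lights").glob("*.fit*"))
times = Time([fits.getheader(f)["DATE-OBS"] for f in files])
gaps = (times[1:] - times[:-1]).sec
i = int(gaps.argmax())
print(f"Likely flip between {files[i].name} and {files[i + 1].name}")
```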
  20. Now I am confused. What is a flipped flat, and why would anyone flip a flat? If the camera has not been rotated, the flats would be the same before and after a meridian flip.
  21. Maybe it is different in the program you use, but in PI image calibration (against darks, bias and flats, to remove dust bunnies among other things) is done first, in a separate step from star alignment (which will rotate images that are upside down after a flip). After a flip it is only the image of the sky that is turned, not the chip in relation to the optical train, so the same darks, bias and flats apply before and after a flip.
  22. This is usually no problem at all if you use PixInsight for the stacking (which I do before I start processing in PS). PI will automatically recognize 180°-turned images during star alignment of the subs. Before star-aligning your subs you calibrate the "unturned" subs using a master dark, master flat, master bias, or whatever you want to use for the calibration (it is important to do this on unturned subs so the chip is oriented the same way as in your calibration frames). So the workflow is: calibrate your images as they are now (unturned) -> star align them (called Image Registration in PI; they are automatically turned by PI in the same direction as the reference image you have picked in your stack) -> stack them into one image (called Image Integration in PI). If you have OSC data then you have to debayer after calibration but before star alignment.
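The calibration step itself is just frame arithmetic; a minimal numpy sketch of the standard formula (PI's ImageCalibration does considerably more, but the orientation argument is the same):

```python
import numpy as np

def calibrate(light, master_dark, master_flat, master_bias):
    """The standard arithmetic: subtract dark current, then divide by the
    bias-subtracted flat normalised to its mean (removes vignetting and
    dust bunnies). All frames must share the same chip orientation, which
    is why this happens before any star alignment / rotation."""
    flat = (master_flat - master_bias).astype(np.float64)
    flat /= flat.mean()
    return (light - master_dark) / flat
```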
  23. A totally stunning wall, Ole Alexander! It has an HST feel to it. Congratulations!
  24. Yes, very promising comeback. Maybe also increase colour saturation a tad for even more punch.