Everything posted by ollypenrice

  1. The answer really is simple in one sense: a Newtonian will give you the focal length, at reasonably fast F ratio, for a relatively low price. However, it will not be plug and play in the refractor sense of the term. If you go for an F ratio like F5 or F6 then basic collimation tools have a fighting chance of giving you good results. If you go for F4 expect a much longer battle with the instrument to get it into acceptable tune. Olly
  2. Splendid! I find the Lagoon only so-so, visually, though not as disappointing as the Eagle. My favourite around there is the Swan, which really does look Swan-like at the EP, as in the images. The Trifid is also a nice strong visual target, I think. But the Lagoon is a gorgeous, creamy target to image. Olly
  3. Interesting. I discovered that I had the virtues of both types and the vices of neither... 🤣 No bias there, then! Olly
  4. Alas, I have nothing to advertise, Marv! Our gite is closed down for now and we really don't know when that will change. Yes, we are in the south east. I did do some research on the astro situation in the south west before choosing our location but it is very clear that the SE has the best weather stats, and by no small margin. I have loved the Pyrenees since my first visit by motorbike in 1977 but, apart from Manchester, I don't think I have ever been anywhere so eternally beset by rain!! At least on the French side. As a cyclist I have toiled through murk, fog, drizzle and clag up the French side to be greeted by glorious Spanish sun on the other more times than I can remember. For all that, the Pyrenees are exquisite. Olly
  5. Be aware that focus will drift with change in temperature so a regular re-check is a good idea. I think your data look OK for a 60mm scope. The image would take more local contrast enhancement, I think, and sharpening. You could post a linear stack for us to find out... Olly
  6. Indeed it is. If you use Pixinsight just run the RGB through Dynamic Background Extraction to balance it. In Ps or similar, you can use the black point in Levels to get the top left of the histogram peak in each channel aligned to the same position. That should get you pretty close. Olly
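The Levels black-point trick above amounts to shifting each channel so its background peak lands at the same level. A minimal Python sketch of the idea (not Photoshop's actual maths; `align_black_point` is a made-up helper, and using the median as a stand-in for the histogram peak is an assumption):

```python
import statistics

# Toy channels whose sky backgrounds sit at different levels.
red   = [30, 31, 29, 30, 200]
green = [50, 51, 49, 50, 210]
blue  = [40, 41, 39, 40, 190]

def align_black_point(channel, target=20):
    # Shift the channel so its background (approximated here by the
    # median) lands on a common target level, clipping at zero.
    offset = statistics.median(channel) - target
    return [max(0, v - offset) for v in channel]

balanced = [align_black_point(c) for c in (red, green, blue)]
# All three backgrounds now sit at the same level, so the sky is neutral.
```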
  7. The counter argument would be that fans discourage condensation. I honestly don't know which side wins this one... Olly
  8. Andrew's right. You have to consider pixel size carefully. With 2.4 micron pixels you won't need a huge focal length to reach the roughly 1 arcsecond per pixel which is good for galaxies (other than the large M31, M33 and M101.) 500mm will get you there. Any longer and you'll be over-sampling on those tiny pixels, though you could resample downwards in processing. The field of view will soon become small at longer FL, though. The problem is that this isn't a FL for planetary and lunar imaging. I'm not sure that the task of finding a single scope which will be good for DS galaxies and solar system is possible since the focal lengths required are so different when the DS camera has such small pixels. Olly
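The sampling arithmetic behind this can be sketched in a couple of lines of Python (`image_scale` is a made-up helper; the 206.265 factor is arcseconds per radian, scaled for micron/millimetre units):

```python
def image_scale(pixel_um, focal_mm):
    # Image scale in arcseconds per pixel:
    # 206.265 * pixel size (microns) / focal length (mm).
    return 206.265 * pixel_um / focal_mm

# 2.4 micron pixels on 500 mm of focal length land almost exactly
# on the ~1"/pixel figure quoted above for galaxies.
print(round(image_scale(2.4, 500), 2))
```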
  9. Where is this noise people are discussing? Olly
  10. The red channel was, I think, stronger than the green and blue but, like Carole, I didn't find a gradient. Olly
  11. Well that's tremendous! I have the same region from my CCD but in far more time and mine isn't at all as good. I also found the IFN to be brown which, at the time I did this (with Tom O'Donoghue) wasn't something on which there was agreement. Your rig continues to impress... (as does your use of it.) Superb. Olly
  12. The evidence is against you. When we had both, I would frequently say to our dog, 'Cachou, eat the cat!' While Cachou was absolutely clear on the matter of eating the cat's dinner (which she would do even when not at all hungry) she steadfastly failed, despite wishing to please me, to eat the cat. She couldn't bring herself to do it. She was, bless her, simply too humane... 🤣 Olly
  13. I have no experience of shooting in LP and wouldn't want to advise you on that. I began imaging at our dark site and that's all I've ever done. For serious narrowband, though, I think I would go for mono and individual filters. From what I have seen OSC/dual band is essentially modified OSC, not pure narrowband. Olly
  14. Hi Rashed, I'm not sure what the two contradictory points are...? (I'm not being evasive; after re-reading the thread I just don't know.) I never said that mono is four times faster, however. (At least I hope I didn't, because it isn't.)
The broadband equation is roughly this. Luminance: all object photons. Each colour: 1/3 of all object photons. So in 4 hours of LRGB you have 3+1+1+1 = 6. In RGB/OSC you have 1+1+1+1 = 4. That makes the mono advantage 6 to 4, not 4 to 1. It is not that simple, though. The OSC filters don't, in fact, cut off sharply between colours so each one does pass more than 1/3 of the signal. What adds complexity is that some targets, with extremely faint parts, are not going to yield any colour with present technology but may yield a bit of signal - which they will do best in luminance, not through colour filters. The LRGB speed advantage is not going to go away but it is target-variable.
Meanwhile, back in the real world, there are stunning new OSC CMOS cameras and ingenious dual band filters for OSC which have changed the OSC/mono debate since I made the earlier points in this thread. Would I buy a modern OSC CMOS? You bet I would. I had the good fortune to be invited to post-process Yves Van den Broek's mega-mosaic of the galactic equator. Even without a dual band filter his CMOS/OSC data went deeper than my CCD in HaLRGB, though he did not find any significant OIII. (The Squid in what is otherwise his image came from my CCD data.) With a dual band filter he would have found the Squid for sure. See Gorann's RASA images on here.
Yves' image with my scrap of OIII thrown in: https://www.astrobin.com/g82xf7/B/?nc=user
I'm happy to clarify further any earlier points I made if I can. Olly
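The 6-to-4 argument is simple enough to write out as a back-of-envelope calculation (idealised filters only, ignoring the real-world overlap mentioned above):

```python
# One hour of luminance gathers ~3 units (all object photons);
# one hour through any single colour filter gathers ~1 unit (a third).
lrgb_signal = 3 + 1 + 1 + 1   # 4 hours split across L, R, G and B
osc_signal  = 1 + 1 + 1 + 1   # 4 hours of one-shot colour

print(lrgb_signal, osc_signal)            # 6 vs 4
print(lrgb_signal / osc_signal)           # mono advantage of 1.5x, not 4x
```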
  15. Nothing has ever been known to go wrong, go wrong, go wrong... Olly
  16. I don't think there's anything special about it, Goran. Sometimes the noise texture just strikes me as likely to be well-handled by it so I try it. Essentially I tried it on a whim and it seemed about right and very mild. There was next to no noise in this data anyway, if you were careful not to provoke it during the stretches. The downsampling (I assume) of the data for publication left it superbly clean. I feel the trick is to stretch the background only to the desired brightness and then no more. Stretch above that, of course, for all it will give. Olly
  17. 'Deja vu all over again,' again! 😄 Olly
  18. I've been using CS3 for years with these colour settings: I recently decided to give the latest Ps a go along with Lightroom. Now Lightroom and CS3 give me the same colours but the new subscription Photoshop gives me a much redder hue. For example, the background in the screen grab above gives me a pinkish colour rather than the pale grey I see here. The colour settings on the new Ps are as below. So both are set to the same version of sRGB, as it seems to me. When I post images on here on SGL they agree with the way they look in Lightroom and CS3. So does anybody know what I need to do in the subscription Ps to get it to agree with CS3 and Lightroom??? Cheers, Olly
  19. What you'll get from the filter is an approximation of the HOO palette but you'll have the blue as well. If you discard your blue channel you'll also discard a good part of your OIII. And you haven't isolated the SII at capture. (I don't know how much of it gets into the red channel at all with the filter in question.) I don't think what you are aiming to do is possible because NB imaging relies on isolating the separate gases' emissions. If you do want to re-assign colour channels in Ps it's easy. Split the channels into three greyscale images, R, G and B. Take one of these and go to Image-Mode and convert it to RGB mode. It will still look just the same - greyscale - because all the new channels are exactly equal. But if you split the channels again you can go to Merge Channels, choose RGB mode, and then put whatever channel you like in each channel. You can put your red layer into blue or whatever you like. However, you don't have narrowband data so, personally, I think you'd be far better off working with the filter as intended. Olly
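The split-then-merge channel reassignment described above can be mimicked with a toy Python example (pure illustration, nothing to do with Photoshop's internals; here the red data goes into blue and vice versa):

```python
# A tiny "image" as a list of (R, G, B) triples.
pixels = [(200, 120, 40), (10, 60, 90)]

# Split into three greyscale channels, as Ps Split Channels would.
r = [p[0] for p in pixels]
g = [p[1] for p in pixels]
b = [p[2] for p in pixels]

# Merge them back with a different assignment: red data into the
# blue channel and blue data into the red channel.
remapped = list(zip(b, g, r))
print(remapped)  # [(40, 120, 200), (90, 60, 10)]
```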
  20. Two renditions, the first with a minimal application of the Ha data, the second with it used selectively but more emphatically. Lovely data, beautifully calibrated. Many thanks.
RGB combined at parity in AstroArt 5 and run through DBE in Pixinsight. This was hardly necessary because the data were so clean and gradient-free but it does set the background to neutrality in the three channels. Very light application of SCNR green. (I had one small patch of blue noise which I fixed with SCNR set to Blue - a first for me! I saved it and applied it as a bottom layer in Photoshop, taking the problem patch off with the eraser.) RGB given a simple log stretch in Photoshop CS3 (Levels.)
Luminance given DBE in Pixinsight (again it hardly needed it) then given an initial log stretch in Photoshop CS3 till the background was at 21. After this I pinned the background at that level and continued to stretch just above that in Curves. Initially I did a very hard stretch (which the data supported) but in the back of my mind I knew it would overwhelm the RGB - which it did! So I redid the Curve in Luminance, stopping short of what it would support, but kept the hard stretch for later.
L applied to RGB in Photoshop and then the LRGB was further stretched as one in Curves with the background pinned to 21 again. I then applied my over-hard stretch as a luminance layer at low opacity to get what I could out of it. Next, lots of tiny iterations to tweak colour, reduce stars (Noel's Actions) and add contrast by pinning points in Curves and stretching just above that. I went back into PI to run LHE at one point but brought it back into CS3 to use as a Layer at partial opacity. Core sharpened as a bottom layer using Unsharp Mask with the stars excluded. Improved bits were let through using the eraser on the top layer.
A few large stars were reduced using Ps Layers: make a copy layer and use a circular, well feathered eraser to take off the top layer over and just around the star in question. Activate the bottom layer, go to Curves and pin the curve at the background level just around the star. Then pull down the curve just above that. Reducing the star increases its saturation, so reduce that too. I cannot recommend this technique on big stars too highly.
Noise reduction: I was tempted to use none at all because the data were ultra-clean and M33 has a strangely grainy, noise-like speckled look even in the Hubble image. However, I ran Ps Despeckle as a bottom layer and took off the top layer only for the faint glow between the spiral arms. The background sky and brighter parts had no NR whatever. Well done IKI!
Ha had a gradient! (Moonlight for a guess?) That was removed in PI/DBE. Then stretched brutally using an aggressive Curves stretch in Ps. (Not trying to make a nice Ha image, not worried about the noise in the darker parts because they won't end up in the final blend. I just wanted to get those nice ring structures above the brightness of the red channel so I could let them into the final image.) Ha then added to red in blend Mode Lighten. The resulting image was placed as a bottom layer below the LRGB and allowed into the image where desired. (I was looking for small, delicate features absent from the LRGB. I didn't want a big red glow in the core.)
I think the Ha makes an entirely different image, neither better nor worse, so I've posted both. Great fun and thanks to all involved for the data. Olly
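The "pin the background, stretch just above it" move that recurs in this workflow can be caricatured in a few lines of Python (a hedged sketch only, not Photoshop's actual curve maths; `pinned_stretch` is a made-up helper):

```python
def pinned_stretch(values, pin, gain=1.5, top=255):
    # Leave everything at or below the pinned background level alone;
    # stretch what sits above it by `gain`, clipping at the white point.
    out = []
    for v in values:
        if v <= pin:
            out.append(v)
        else:
            out.append(min(top, pin + (v - pin) * gain))
    return out

# A background pinned at 21 stays at 21; brighter values are lifted.
print(pinned_stretch([21, 30, 100, 250], pin=21))
```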
  21. The wedges are inherently very difficult to align because they are not precision made. The result is that an adjustment in one axis causes an unwanted movement in the other. The only solution is to do it all in small, incremental adjustments and accept that it will take some time re-iterating the procedures. It helps not to loosen the locknuts too much. The closer you get to alignment the wiser it is to keep them moderately tight as you then turn the adjusters. With the amount of play in the system there is a wide gulf between what's supposed to happen and what does happen. I would also ditch the software based approaches. Just use the classical drift method. That way you won't be worrying about the software complicating things, which it usually does. The DARV method of drift alignment is a great way to get close, and possibly close enough. https://www.cloudynights.com/articles/cat/articles/darv-drift-alignment-by-robert-vice-r2760 With the drift method you are not operating through a third party (the software) but interacting directly with what matters, the sky. Olly
  22. I agree with the others. The need for different exposure times is grossly exaggerated and in over ten years of deep sky imaging, as a provider, I have used different exposure times on fewer than half a dozen images. The obvious one is M42. After that, well.... I really can't remember! Olly