Everything posted by ollypenrice

  1. This practice has a long history on SGL and began when the software Registar appeared on the scene. Registar will resize, align and combine data from any source, any pixel scale, any filter and any orientation, though it is often better to do the final combination in a layers-based program so you can control the weighting and colour channel given to each component. Other programs can now do some or all of this, but I still use Registar followed by Photoshop myself. A long time ago an SGL-based collaboration of this kind took an APOD. As well as adding to the sheer volume of data, the technique allows 'areas of interest' in an image to be shot at higher resolution than the full background image, which obviates the need to shoot object-free starfield at the same resolution as the objects within it. This tends to be called 'composite imaging' and is widely used, though the writers of PixInsight (known as The Spanish Inquisition in my house 🤣) dismiss it as 'painting' and won't support it. The technique cannot be fully automated, since some craftsmanship is involved in the blending, but since all the data are genuine representations of what's there, I don't agree with the Inquisitors on this! (A rough resample-and-blend sketch appears after the last post in this list.) Below we have an image from two setups, the Triplet galaxies being shot at nearly 4x the resolution of the main field. To do this as a full-resolution mosaic would have taken hundreds of hours. So your hunch was a good one... Olly
  2. Indeed, the Wales is a much-missed unit. There is also The Isle of Wight, though it has a rival in The Manhattan. Both are widely used in popular descriptions of the Neutron Star. By the way, the neutron star has a density of six .................S (insert politician of choice, but not on here!) 😁lly
  3. Now you've started us off: This well, with a surface area of 17 flatfish, is over 2 redwoods old and 30 snorkels deep. When full it can store 14 giraffes of water in a volume of 9 elephants and with a peak transparency of almost three jellyfish... Olly
  4. Unless you plug in the pixel dimensions, the simulator can only tell you how the target will be framed on the chip. This is not without interest, of course, because if it won't fit you'll need to do a mosaic, but it doesn't tell you anything more than that about the suitability of the setup or the final size of the object in the image. (A quick field-of-view calculation of the kind the simulator performs is sketched after the last post in this list.) I think your decision is wise. I began by spending a lot of time and effort trying to set up a Meade LX200 for imaging but never managed to get a workable image from it. People do manage it, but I knew it wasn't for me. Olly
  5. You must have some funny spheres in the Netherlands! lly
  6. I see a national paper has given the dimensions of an impacting asteroid as 'half the size of a giraffe.' Is the giraffe now a unit of volume for impacting bodies? Or for volumes in general? With 0.3 giraffes of cargo space the new Ford Transit will be popular with roofing contractors... It is believed that a new pulsating variable has been discovered in Camelopardalis. Varying between 3x10^9 and 7x10^9 giraffes it has a period of around 8 days... Wanted: mini-digger and driver to dig a hole of 1 giraffe in our garden. Our beloved Lanky passed away in her sleep yesterday. Olly
  7. My best ever was an elderly TeleVue Genesis (4 inch F5 refractor) with 2 inch widefield eyepiece (I think a TV 35mm Panoptic at that time.) With an OIII or UHC filter it could show the Rosette or the entire Veil, etc. I now have a 70mm Pronto for these tasks but the larger aperture of the Genesis was worth having. Olly
  8. You're assuming, here, that 'wasted real estate' applies only to sky outside the object which you will crop and discard. I think this is erroneous: it is also a waste of real estate to use four pixels to collect the same information as might be perfectly well collected on two. Worse, not only does it waste real estate but it adds noise. Olly
  9. I think we might as well get the terminology right at the outset because it does help to get things straight and avoid confusion.

     Focal length (and nothing else) determines the size of the object's image as it is projected onto the camera chip. Longer FL = larger projected image. It does not necessarily mean a larger final image; we'll come to that. The objects you will photograph are bigger than any picture you'll make of them, so there is no 'magnification' anywhere in this story! See resolution, later.

     On a given chip, FL also determines the size of your field of view, but chip size can vary, increasing or decreasing your field of view. Chip size of itself has no effect whatever on the size of the object in your final image. A small chip imposes a natural 'crop', but a larger chip can be cropped to give the same framing as the smaller chip. Resolution has nothing whatever to do with pixel count, and the term 'crop factor' is meaningless in AP.

     Resolution is determined by a combination of focal length and pixel size. It is measured in arcseconds per pixel: how big a piece of sky lands on each pixel. The same resolution can arise from a short focal length with small pixels or a long focal length with large pixels. So resolution (exclusively) determines the size of the object in the final image. Put more pixels under the telescope's projected image and you'll get a bigger image on your screen when viewed at full size (one camera pixel per screen pixel).

     Photographic speed is not determined by focal ratio, though folks frequently insist that it is. Speed (exposure time) is determined by how much light gets onto each pixel. A slow F-ratio scope used with large pixels can put exactly the same amount of light onto each pixel as a fast F-ratio with small pixels. We can make our pixels 'bigger' by binning them 2x2 or 3x3, etc. The takeaway is that, by binning your F10 scope's data in hardware (CCD) or software (CMOS), you can speed it up. (The image-scale and binning arithmetic is sketched after the last post in this list.)

     However, imaging at high resolution is less tolerant of error than imaging at low resolution, and an alt-az mount will produce massive errors. I would say: try your scope by all means, but don't throw any money at it. Don't buy a reducer, don't buy a wedge, just try it as is. If you get the bug you will be very unlikely indeed to want to stick with the SCT for deep sky imaging, and just as unlikely to find a buyer for an imaging-fitted SCT. Olly
  10. Good grief, that's astonishing. I can hardly believe it's been ignored in this way. Hearty congratulations. Image of the year for me! Bravo. (I have done a multi-panel of the Heart, Soul, SH2-202 and VDB 14/15 but, unfortunately, this object lies just outside that one's frame.) I'm sure there will be more renditions of this coming online now, but this is beautifully done as well as being exciting for its content. Olly
  11. I don't intentionally produce images for mobile phones because I hate mobile phones 🤣 so I haven't tried this, but what happens if you resize to a much lower pixel count? You don't need a large pixel count for a small screen display so I'd expect the downsized image to suffer less compression. Olly
  12. Hi Anne, I don't think that's a grumpy statement, it's just good sense. However, in defence of this thread, its content was determined by the question with which it began. We could, and perhaps should, have a thread on 'getting the best out of what you have.' In my view this would not be a thread dominated by discussions of gear or methods of acquisition but by image processing. Although imaging gear isn't cheap, it has never been as affordable as it is now, nor has it ever been better. And capture is a mechanical process which can be learned and brought quite quickly to a standard which cannot be effectively improved upon. Processing, however, is something which can be improved upon indefinitely. And it can be taken in different directions, allowing the same dataset to reveal different aspects of the objects imaged. In a nutshell we are 'guiding limited,' and 'seeing limited,' etc etc but, above all, I think we are 'processing limited.' Olly
  13. Nice job on a glorious galaxy. Olly
  14. Hi Adrian, Yes, I guessed you'd used star removal (and very deftly.) Did you resize the blue stars to match the NB stars? The basic problem with HaOIIIB strikes me as being the mismatch in star size between blue and narrowband. Olly
  15. Lots to enjoy but the hotspots on the nebula are a mystery... Olly
  16. I sympathize with the problem and can only tell you my own closest approach to a solution. I sometimes start with Kepple and Sanner https://tavcso.hu/en/product/TMB05P Their volumes are pretty comprehensive and let you trawl through a constellation while looking for potential targets. I note these down on that long-forgotten flat, white stuff or, failing that, a roll of papyrus, and return to my PC screen to look them up, see how they will frame, etc. Olly
  17. Yes, though I tend not to include this argument in my advocacy of mono because the crux lies in the red highlight above. The statement is perfectly correct, but I'm prepared to admit that the interpolation is so good that it all but eliminates the loss of resolution in question. It requires a good debayering algorithm to restore the missing information, and they are not all equal, but an enormous amount of research has gone into interpolation procedures which make a very well-informed estimate of the lost values. In reality I don't think we can claim that a four-pixel RGGB group has only a quarter of the resolution of the same pixel group unfiltered. (By 'in reality' I mean in the context of amateur astrophotography rather than photometric measurements interpreted at pixel scale.) (A toy illustration of Bayer interpolation follows the last post in this list.) Olly
  18. Really lovely and with the tiniest of powdery stars. Ha-OIII-Blue is an interesting idea which I've never tried. The blue is reflection broadband, of course, so only a BB filter will do. I remember Ian King posting an Ha-OIII-Blue image more than ten years ago and have never got round to having a go. I dare say modern star-removal software would make the combining much easier. Very interesting! Olly
  19. Again, though, you are assuming that OSC must be faster than mono. Here's a quick sum, over-simplified by the fact that the green passband on OSC chips extends beyond green because it is intended to act as a partial luminance panel:

      Hour 1: OSC gets (effectively) 15 mins red, 15 mins blue, 30 mins green. Mono gets 60 mins red.
      Hour 2: OSC gets (effectively) 15 mins red, 15 mins blue, 30 mins green. Mono gets 60 mins green.
      Hour 3: OSC gets (effectively) 15 mins red, 15 mins blue, 30 mins green. Mono gets 60 mins blue.

      Call it a draw so far. Now...

      Hour 4: OSC gets (effectively) 15 mins red, 15 mins blue, 30 mins green. Mono in luminance gets 60 mins red, 60 mins green, 60 mins blue. (Hour 5 can repeat hour 4, increasing the mono advantage.)

      In one luminance hour, the mono has captured about three times the signal of the OSC. The luminance hour does not differentiate between colours, but it does not need to do so. The LRGB system was invented to save time precisely because its inventors realized that there was no need for colour differentiation across the full exposure time. (The tally is sketched after the last post in this list.)

      Shooting Ha: it matters not a jot what size your pixels are or how many you have. On two equivalent chips, one OSC and one mono, you obtain four times the signal with the mono. There's no way round that. The modern dual- and tri-band filters for OSC fight back by capturing two or three bandwidths of NB at the same time, but they don't do it on all pixels, and the filters are not of comparable quality to the best dedicated NB filters.

      I'm not arguing against OSC here, I'm only arguing against what I think is a common misconception about LRGB. Olly
  20. A 40% fall-off in signal is a lot. However, I have a 23% vignetting fall-off on one setup and it does calibrate out. What's still to be explained, though, is why the flats don't calibrate out the patterns formed on the sensor by debayering. (A bare-bones flat-calibration sketch follows the last post in this list.) I have to ask this: would you not be better off sticking with a standard sensor and using a dual- or tri-band filter? Olly
  21. But but they are smaller pixels. But but but they are more sensitive than old CCD ones. But but but but, we must compare like with like. 🤣lly
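
A few rough sketches referenced in the posts above follow. In every case the file names, numbers and library choices are assumptions made for illustration; nothing here comes from the original posts or claims to be the tools they describe.

First, the composite idea from post 1: resample a high-resolution patch to the wide field's pixel scale and feather it into place. This is not the Registar + Photoshop workflow (which is interactive); it is only a minimal sketch of the principle, assuming the two datasets are already registered apart from a known pixel offset.

```python
# Minimal sketch of composite imaging: blend a high-resolution patch into a
# wider, lower-resolution field. All scales, sizes and offsets are invented.
import numpy as np
from scipy.ndimage import zoom

wide = np.random.default_rng(0).normal(100, 5, (1000, 1500))   # stand-in wide field
patch = np.random.default_rng(1).normal(120, 5, (800, 800))    # stand-in high-res patch

wide_scale = 3.5    # arcsec/pixel of the wide field (assumed)
patch_scale = 0.9   # arcsec/pixel of the high-res data (assumed, ~4x finer)

# Resample the patch down to the wide field's pixel scale.
patch_resampled = zoom(patch, patch_scale / wide_scale, order=3)

# Feathered (smooth-edged) weight so the join is invisible after blending.
h, w = patch_resampled.shape
yy, xx = np.mgrid[0:h, 0:w]
edge = np.minimum.reduce([yy, xx, h - 1 - yy, w - 1 - xx]) / 30.0
weight = np.clip(edge, 0, 1)

# Paste at an (assumed) alignment offset found beforehand by star matching.
y0, x0 = 400, 600
region = wide[y0:y0 + h, x0:x0 + w]
wide[y0:y0 + h, x0:x0 + w] = weight * patch_resampled + (1 - weight) * region
```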
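
For post 4: the field of view a framing simulator reports follows directly from chip size and focal length. A quick sketch with assumed numbers (an APS-C sized sensor on a 2000 mm SCT):

```python
# Field of view from focal length and chip size, as used by framing simulators.
import math

def fov_arcmin(chip_mm: float, focal_length_mm: float) -> float:
    """Angular field covered by one chip dimension, in arcminutes."""
    return math.degrees(math.atan(chip_mm / focal_length_mm)) * 60

fl = 2000.0                 # assumed: an 8-inch SCT at F10
w_mm, h_mm = 23.5, 15.7     # assumed: APS-C sized sensor
print(f"Field: {fov_arcmin(w_mm, fl):.1f} x {fov_arcmin(h_mm, fl):.1f} arcmin")
# M31 spans roughly 190 x 60 arcmin, so it will not fit and would need a mosaic.
```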
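
For post 9: image scale and the effect of binning, written out as the simple formula scale = 206.265 x pixel size (um) / focal length (mm). The focal length and pixel size below are assumed, chosen to suit an F10 SCT:

```python
# Image scale depends only on focal length and pixel size. Binning n x n makes
# the effective pixel n times larger, so the scale grows and each superpixel
# collects n^2 times the light.

def image_scale(pixel_um: float, focal_length_mm: float, binning: int = 1) -> float:
    return 206.265 * pixel_um * binning / focal_length_mm

fl = 2000.0          # assumed: 8-inch SCT at F10
pixel = 3.76         # assumed: microns, typical of modern CMOS sensors

print(image_scale(pixel, fl))        # ~0.39 "/px : heavily oversampled for typical seeing
print(image_scale(pixel, fl, 2))     # ~0.78 "/px after 2x2 binning
print(image_scale(pixel, fl, 3))     # ~1.16 "/px after 3x3, ~9x the light per output pixel
```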
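
For post 17: a toy demonstration of why debayering is an estimate, and why the estimate is usually a good one on smooth data. Only the green plane is treated, with a simple bilinear fill rather than a production algorithm such as VNG or AHD; the test scene is invented:

```python
# In an RGGB mosaic only half the pixels carry a green measurement; the rest
# are filled in by interpolation. Toy bilinear fill of the green plane only.
import numpy as np
from scipy.ndimage import convolve

x = np.linspace(0, 3, 16)
scene_green = 100 + 20 * np.sin(x)[None, :] + 10 * np.cos(x)[:, None]  # smooth "true" signal

# In an RGGB pattern, green falls on every pixel where row + column is odd.
rows, cols = np.indices(scene_green.shape)
gmask = (rows + cols) % 2 == 1

mosaic_green = np.where(gmask, scene_green, 0.0)      # what the sensor records

# Bilinear estimate at the missing sites: mean of the four measured neighbours.
kernel = np.array([[0.0, 0.25, 0.0],
                   [0.25, 0.0, 0.25],
                   [0.0, 0.25, 0.0]])
estimate = convolve(mosaic_green, kernel, mode="reflect")
green_full = np.where(gmask, mosaic_green, estimate)

# Compare estimated and true values away from the array edges.
missing_interior = ~gmask
missing_interior[0, :] = missing_interior[-1, :] = False
missing_interior[:, 0] = missing_interior[:, -1] = False
err = np.abs(green_full - scene_green)[missing_interior]
print(f"typical interpolation error at un-measured green sites: {err.mean():.2f} ADU")
```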
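
For post 19: the hour-by-hour accounting as a tally, using the same simplifying assumptions as the post (an OSC hour counts as roughly 15 minutes red, 15 blue and 30 green; a mono luminance hour counts for all three colours):

```python
# Tally of effective exposure per colour under the simplified model in post 19.
osc = {"R": 0.0, "G": 0.0, "B": 0.0}
mono = {"R": 0.0, "G": 0.0, "B": 0.0}

hours = 5
mono_plan = ["R", "G", "B", "L", "L"]      # one filter per hour (assumed plan)

for h in range(hours):
    # OSC: every hour is split across the Bayer matrix.
    osc["R"] += 15
    osc["B"] += 15
    osc["G"] += 30
    # Mono: 60 minutes through a single filter; luminance counts for all three.
    f = mono_plan[h]
    if f == "L":
        for c in "RGB":
            mono[c] += 60
    else:
        mono[f] += 60

print("OSC  effective minutes:", osc)    # {'R': 75, 'G': 150, 'B': 75}  -> 300 total
print("Mono effective minutes:", mono)   # {'R': 180, 'G': 180, 'B': 180} -> 540 total
```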
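
For post 20: the reason a smooth vignetting fall-off calibrates out is that the light frame is divided by a normalised flat. A bare-bones sketch, assuming master light, dark and flat frames already exist as files:

```python
# Bare-bones flat-field calibration: vignetting divides out because the flat is
# normalised to its mean before the light frame is divided by it.
# 'light.npy', 'dark.npy' and 'flat.npy' are assumed master frames.
import numpy as np

light = np.load("light.npy").astype(np.float64)
dark = np.load("dark.npy").astype(np.float64)     # matching dark for the lights
flat = np.load("flat.npy").astype(np.float64)     # assumed already dark/bias subtracted

flat_norm = flat / flat.mean()                    # ~1.0 in the centre, <1.0 in the corners
calibrated = (light - dark) / flat_norm           # a 23% (or 40%) fall-off divides out here

np.save("calibrated.npy", calibrated)
```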