
Posts posted by ollypenrice

  1. I don't see why you couldn't move a nice little finder-guider system from the Sky Guider to a more full size mount when you upgrade. If your aim is to continue with essentially shortish FL imaging with an 80mm F6 imaging scope a finder guider will work fine for you.

    iOptron make much of the polarscope on the mount you have so do you really need a Polemaster? Once guiding, a reasonable polar alignment is all you need. It doesn't need to be perfect, especially for shortish exposures. (Around 5 minutes.)

    FLO have a number of decent-looking finder guiders like this one: https://www.firstlightoptics.com/zwo-cameras/zwo-finder-guider-asi120mm-bundle.html  I have no experience of them since I guide with Lodestars and ST80s but they should be good.

    You are asking the right questions and planning in a thoughtful way.

    Olly

  2. 7 minutes ago, Datalord said:

    No, the mount is working like a charm now that I just use normal guiding and the occasional MLTP sent to the mount. 

    [attached image: IMG_20190717_132417.jpg]

    Agree, but... Determining that point is exactly what this post is about. The above bad sub looks quite fine in the nebula on the zoomed out version, which is why I'm even contemplating getting this data in. 

     

    Do you consider the above image "slight" or worse? 

    Personally I wouldn't use the sub you described as 'bad' in the first post.

    Regarding guiding, everything I've seen as a robotic host tells me that not guiding is more hassle than guiding. Seven years on, my first Mesu, guided, has yet to drop a sub.

    Olly

  3. 2 hours ago, carastro said:

    I use the Synscan User Defined Object on the handset when the target is not in the Synscan catalogue, i.e. a VBN or Sh2 target.  But annoyingly Skywatcher Synscan does not let you name the target, or record and save more than one at a time.  So I have to re-input every time I change targets, including any I have done on an earlier occasion but have since replaced with another user object.  Rather annoying, I find.  I also have to remember which target is recorded, as all it will give you is the coordinates.

    Olly, it's in the menu where you select named star, then scroll down until you find user object.  Middle button 8 if I recall without looking at the handset. 

    While I am on this subject, I do find it a chore sometimes to find the actual coordinates of some targets - anyone got a good resource for this?  I find myself scouring the internet hoping someone has recorded them; often it's in Wikipedia, but sometimes not.  I am now keeping my own list of the targets I have done so far and, when I remember, I post them up on my website when I actually image the target, so I have a permanent record and maybe it might help others too.

    Carole  

    The RA and Dec co-ordinates are usually on the net if you look the object up but what I prefer to do is model the framing in my planetarium software and note the centre of that frame as I intend to image it. Once actually framing up the target outside I'll note (on good old paper) my final choice of co-ordinates. I then just have RA and Dec showing on the handset another night and slew to the target while looking at those co-ordinates as they roll past. It's so easy that I've never felt the need to change it.

    Olly

  4. 19 minutes ago, kirkster501 said:

    Think: if you are imaging a nebula, say, and subsequently discover that you have star trails, the nebula will be trailing as well.  No amount of tinkering in deconvolution or sharpening is going to change that fact.  Past a certain point the sub is useless, and you are defeating your purpose of creating a nice picture by trying to force dodgy subs into your final composite.

    Absolutely.

    Was the bad stuff captured while you were fighting with the mount or are you still working on that?

    Olly

  5. 10 hours ago, Starwiz said:

    That's a little concerning.  I wasn't aware - do you have any more info?

    John

    Every now and then an update upsets the functioning of certain devices. From memory, a couple of years ago an internal change to the Windows time clock clashed with the timekeeping of the SiTech mount control system as used on some up-market mounts including PlaneWave and Mesu. More recently a Win 10 update clashed with Atik camera software at just the same time as Atik were snowed out of their factory, which delayed their response. Both issues were fixed but it's just an example of why, if I can keep an item computer-free, I do so. (A good few years ago I reconfigured a PC for dual screens and this trashed my camera control. I think this was a Vista bug. The better you are with IT the less seriously these things will disable you, but I'm not an IT expert.) Again, it's particularly important for me as a provider to have stuff that works. At one time I had a Takahashi mount (not recommended) which could only be PC controlled and, despite religiously using the same USB port, I lived in fear of the message 'Device not recognized' when I powered it up for the evening. Sometimes it would just do this, quite randomly so far as I could see. I still get connection issues with filterwheels on occasion and, quite honestly, would happily go back to a manual one for the narrowband half of our dual rig, where we never want to scroll filters.

    Olly

  6. 3 hours ago, Starwiz said:

    I've ditched the handset, so the mount's controlled from the laptop via Cartes du Ciel and APT, also using ASCOM and EQMOD plus PHD2.

    In APT for instance, I've defined NGC7000 The Wall as a custom object.  I can then use the APT Point Craft feature to plate solve which works out where the scope is actually pointed, and then sync with CdC after which I just go to the object.  After reaching the object, Point Craft plate solves again to check, then does another move and solve if needed.

    John

     

    I like handsets. They are not on the internet and no update for Windows 10 will ever kill them. (They can kill the camera and the filterwheel - and have done so - but the mount is internet-free.)  As a provider I'm in a slightly different position from most people, since this kind of setback is not an interruption to my hobby, it's a let-down for our guests, so I really don't want it to happen.

    Olly

  7. 6 hours ago, carastro said:

    I wouldn't use the ones where the stars are trailing.  But the rest of them could be used, barring any with huge plane tramlines.

    Carole 

    You should be able to use these. 'Remove line' in AstroArt will do some of the work on the individual sub, and sigma clip will do the rest if you have enough subs in the stack. Always worth a try. The other way to do it is to stack the lot, then stack only the ones without plane trails. Give both a basic stretch (saving it as an action is the easiest way), paste the full stack with trails over the reduced stack without, and erase the trails. (This is where Photoshop wipes the floor with Pixinsight. 👹:D)
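    For anyone curious what the sigma clip is actually doing, here is a minimal sketch in Python (not AstroArt's own routine, just the general idea), assuming the subs are already calibrated and registered as 2-D numpy arrays:

        import numpy as np

        def sigma_clip_stack(subs, kappa=2.5):
            """Mean-stack registered subs, rejecting any pixel more than
            kappa standard deviations from the per-pixel median."""
            cube = np.stack(subs, axis=0)          # shape: (n_subs, height, width)
            median = np.median(cube, axis=0)
            sigma = np.std(cube, axis=0)
            keep = np.abs(cube - median) <= kappa * sigma
            # Plane-trail pixels are outliers in the stack, so they fall outside
            # the clip and contribute nothing to the mean.
            return np.sum(cube * keep, axis=0) / np.maximum(keep.sum(axis=0), 1)

    With enough subs the trail only appears in one or two frames at any given pixel, which is exactly why the clip can reject it without harming the rest of the data.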

    Olly

  8. 3 hours ago, Starwiz said:

      Once I have decided the coordinates for the image, I can input them into APT as a custom object, so it makes imaging the same place on different nights much easier for me.

     

    I write mine down on a piece of paper. (While I prefer a quill pen I'll stoop to a biro if nothing else is to hand!)

    :Dlly

    (I can never remember how to set up user defined objects or how to find them in the handset if I do :BangHead:. )

     

  9. I'm always bemused by the way plate solving seems to have become 'essential' when, like Carole, I still don't use it.

    Carole says to orientate the camera as closely as possible to its original position. This is true, but the trick is to orientate the chip along the lines of RA and Dec, either in landscape (long side along RA) or portrait (long side along Dec.) This is dead easy to do: do it by eye first of all, sighting the camera so it's parallel with the dovetail. Refine it by taking a 5 second sub while slewing slowly in just one axis. This will produce a star trail at some angle to the chip. Repeat while rotating the camera between subs till the star trail is parallel with the edge of the chip and you're done. It is very rare indeed for framing to require any other angle and, by sticking to RA and Dec, repeatability is easy to achieve. Just look at the stars around the edges of the image.

    Olly

  10. I don't agree at all with the advice in the book. On our two premium mounts, Mesu 200s, we use what I consider to be long guide subs of 4 seconds. These mounts have very low and very slow PE and don't move far in 4 seconds.  When guiding our EQ sixes we guide at sub-second intervals because there is rapid PE. If we guided at five second intervals with these mounts we'd be all over the place.

    However, the guide trace itself cannot be entirely relied upon to tell the whole truth: if you go for ultra-short guide subs the mount won't get far along its error before being corrected, so you'll get a nice RMS figure but you may be chasing the seeing in doing so. Still, I hope Rickwayne can find that link because I've long suspected that short subs work well. I only use long ones on the Mesus because I can, and I've been prepared to accept the idea that longer subs will give a more averaged-out stellar centroid.

    To test it properly we would need to make a good set of exposures guiding on short subs, another good set guiding on long ones, then compare the FWHM values of the resulting images.

    Olly

  11. 7 hours ago, geordie85 said:

    The luminance channel is where the details lie as every pixel is registering every photon. 

    If you imagine the Bayer matrix is like a filter: red only lets in red light, green only lets in green light and blue only lets in blue light. So if a red photon hits the green filter, it is blocked from the sensor.

    In truth this is not the main reason for the luminance channel containing most of the detail. It seems like a reasonable explanation, but debayering algorithms are very sophisticated (some are better than others) and they use ingenious ways of working out what, say, the missing red pixel would have recorded had it existed. Imagine the curved edge of a red nebula: a red filter in a mono camera will record that curve on every pixel, whereas an OSC will only record it on one pixel in four. However, the debayering algorithm will reconstruct that curve as a red value on all pixels. This is not quite as good, but it is very good indeed. RGB from a mono camera is shot on every pixel for every colour but it still doesn't compare with luminance.

    No, the real reason the Luminance channel is powerful both for finding detail and for finding faint signal is that it passes all the light of the visible spectrum at once, which is to say about three times as much as gets through a colour filter. The LRGB system was invented, pure and simple, to save time. That's what it does. When you shoot L you are capturing vastly more light than through a colour filter so you get more signal. This will drag out the faint stuff and allow you to sharpen the details in the areas of strong signal.
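    To make the reconstruction point concrete, here is a very rough sketch of the idea. It assumes an RGGB Bayer pattern and plain bilinear interpolation - real debayering algorithms are far cleverer - but the principle is the same: three quarters of the red values in the output are estimated from neighbours rather than measured.

        import numpy as np
        from scipy.ndimage import convolve

        def bilinear_red(mosaic):
            """Recover a full red plane from an RGGB mosaic by averaging
            the nearest measured red pixels."""
            red = np.zeros_like(mosaic, dtype=float)
            red[0::2, 0::2] = mosaic[0::2, 0::2]        # measured red sites only
            weight = np.zeros_like(red)
            weight[0::2, 0::2] = 1.0
            kernel = np.array([[0.25, 0.5, 0.25],
                               [0.5,  1.0, 0.5 ],
                               [0.25, 0.5, 0.25]])
            # Normalised convolution: the three unmeasured sites in each 2x2
            # block become averages of their measured red neighbours.
            return convolve(red, kernel) / np.maximum(convolve(weight, kernel), 1e-6)

    A mono camera behind a red filter measures every one of those pixels directly, which is the 'not quite as good but very good indeed' difference described above.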

    3 hours ago, wimvb said:

     

    There is a downside to LRGB imaging. If you image from a site with light pollution, the luminance data will contain that, as well as its associated noise. RGB filters, on the other hand, are generally designed to block the worst light pollution, caused by mercury and sodium lamps. These lamps emit light at wavelengths between red and green. By designing red and green filters that transmit only light just above (red) or below (green) these wavelengths, RGB-only can be less sensitive to light pollution than LRGB. Also, green and blue filters are generally designed to overlap at the wavelength at which OIII emits. OIII signals are therefore "boosted" by RGB filters.

     

    Wim, I follow your reasoning but have read many times that OSC cameras are more susceptible to light pollution than mono. This from people who've tried it. I've only ever imaged at a dark site so I have no experience but this is the first time I've heard OSC praised as a light pollution buster.

    Processing an OSC image using a synthetic lum in the way one would use a real lum is sound advice but I know from experience that there is no comparison in signal strength between OSC or RGB and genuine luminance. Theory might suggest that real lum might be 3x stronger than synthetic but I think it's more like 4x stronger in my data.

    In a nutshell, Lum catches all the signal and RGB is used to distinguish between the colours. There is no need to waste photons by shooting only colour.
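    As rough arithmetic (the photon counts below are invented purely for illustration, and real filter bandpasses and QE curves vary):

        import math

        # Invented numbers: an L filter passes roughly the whole visible band,
        # each colour filter roughly a third of it.
        photons_through_L = 3000            # hypothetical count in one sub
        photons_through_one_colour = 1000   # same target, same exposure

        snr_gain = math.sqrt(photons_through_L / photons_through_one_colour)
        print(f"Shot-noise SNR advantage of L over one colour sub: ~{snr_gain:.1f}x")
        # A synthetic luminance summed from R, G and B needs three subs' worth
        # of telescope time to collect what the real L filter collects in one.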

    Olly

  12. 1 hour ago, Starwiz said:

    Olly,

    Are these techniques likely to be covered in the various tutorials on LRGB processing or is it something different?

    Thanks

    John

    There must be tutorials out there on how to do it. I think I first came across this technique from Rob Gendler. The main thing with a very strong L layer is not to add it all at once. In Ps you can add it at only a low opacity as a top layer in blend mode luminosity, boost the saturation of the bottom RGB layer, blur the L layer, flatten and repeat. This adds the L a little at a time and boosts the colour saturation as you go. On the final iteration of adding the luminance you don't blur it and all the fine detail will be restored.
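    If anyone wants to experiment outside Photoshop, here is a very loose numpy analogue of that iteration. The opacity, blur radius and the simple brightness-scaling 'luminosity' blend are all my own assumptions for the sketch, not Gendler's recipe: feed the luminance in a little at a time, blurred, boosting saturation as you go, and only apply it unblurred on the last pass.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def blend_luminance(rgb, lum, opacity):
            """Push the RGB image's brightness toward 'lum' by 'opacity',
            rescaling the channels so the hue is preserved."""
            current = rgb.mean(axis=2)
            target = (1 - opacity) * current + opacity * lum
            scale = target / np.maximum(current, 1e-6)
            return np.clip(rgb * scale[..., None], 0, 1)

        def boost_saturation(rgb, amount=1.1):
            grey = rgb.mean(axis=2, keepdims=True)
            return np.clip(grey + amount * (rgb - grey), 0, 1)

        def lrgb_combine(rgb, lum, passes=4, opacity=0.3):
            """Add the luminance a little at a time, blurred, boosting colour
            saturation between passes; the final pass uses the sharp lum."""
            out = rgb
            for i in range(passes):
                final_pass = (i == passes - 1)
                l = lum if final_pass else gaussian_filter(lum, sigma=2)
                out = blend_luminance(out, l, opacity)
                if not final_pass:
                    out = boost_saturation(out)
            return out

    Both images are assumed to be float arrays scaled 0 to 1 and already registered; the point is the little-at-a-time iteration, not the exact colour maths.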

    Olly

  13. 1 hour ago, Gerry Casa Christiana said:

    Love it when people go against the flow :) I have to say it's quite convincing. Just a quick question if I may: with the first picture, if you needed to improve on it, would you increase time on RGB or spend more time on luminance or H-alpha? Is there a sound ratio of colour to luminance that you follow?

    Thanks

    Gerry

    I might add luminance or simply more RGB as a first step.

    The colour to luminance ratio which I use depends very much on the target. If we are trying to capture faint dusty or other broadband signal it's the luminance which will find it, so I'll shoot lots more luminance than colour, as in this example: https://www.astrobin.com/335042/?nc=user  The luminance found the difficult tidal tail. However, this makes the processing harder because you need various techniques to stop the luminance bleaching out all the colour. If keeping the star sizes down in a nebula shot is the priority then I might shoot no luminance at all and use the RGB as a vehicle to carry OIII and Ha and give naturally coloured small stars. There's no one answer, but the great thing is that the mono camera gives you the flexibility to choose your approach.

    The easiest processing comes from equal amounts of L and R and G and B.

    Olly

  14. I'm going to argue the other way but first let's get one thing sorted out: monochrome CCD with filters is faster than both one shot colour CCD and DSLR imaging. I did this image of the Heart nebula in only two hours and processed it quickly and simply. Each colour had 20 minutes and the H alpha had 1 hour.

    [image: the two-hour Heart Nebula result]

    This compares with the same equipment on the same target done 'properly' with well over 20 hours of data and complex processing:

    [image: the 20+ hour Heart Nebula result]

    I do not believe any one shot colour camera in an F5 system could match the first image in 2 hours.

    In this case the speed in the first image came from the use of the Ha filter but it can also come from the luminance filter which is at least three times faster than a colour-filtered image whether that's from OSC or RGB filters. (OSC and RGB are pretty much equivalent.)

    Personally I think that using the right tool for the job is always easier than using a multi-purpose tool or the wrong tool. I went straight into astrophotography with a mono CCD and almost no computing skills at all. It is often argued and assumed that you should go via DSLR into CCD but I don't agree with this. A number of people whom I've taught on my courses have said that they found DSLR to be a blind alley. Their words, not mine.

    The big argument against CCD was cost, which is fair enough, but there are now dedicated and cooled CMOS cameras which are far cheaper than CCD and in my view they have introduced an exciting mid-cost alternative.

    Olly

     

     

     

  15. The length of the tube very roughly approximates to the focal length on some designs and not at all on others. Best to stick with Steppenwolf's clear, simple and correct definition in the first reply.

    3 hours ago, kirkster501 said:

     

    Imaging time is directly proportional to FL.  So why, with my Meade 14" and its vast light gathering power, does an exposure at F10 take so much longer than with my puny 2.7" refractor at F5?  The light gathering power of the 14" mirror is vastly bigger, so it is capturing hugely more photons.  So why isn't an exposure quicker with that?

    No it isn't. It isn't proportional in any way whatever to focal length! And it is only proportional to focal ratio in certain circumstances. Exposure time is proportional to focal ratio in camera lenses, where the focal length of the lens is fixed and the diaphragm varies the aperture: the area of unobstructed aperture doubles or halves with each change of F stop, which is why exposure follows the F stop (because, with the focal length fixed, the F stop directly tracks the aperture).

    However, if you change the focal ratio by shortening the focal length you are not changing the aperture, so you get no more or less light. You do change the number of pixels you put it on, though. So let's think about your 14 inch versus your 2.7 inch refractor. You are mistaken in thinking the refractor is faster. The Meade is, on the same object, far faster. However, the object is going to have to be quite small to fit on the chip in the Meade, so let's imagine a small, faint planetary which you intend to present at the same screen size as it would appear in the refractor. Firstly, you'll probably be able to bin the camera 4x4 in the Meade and still get the same resolution as you get in the refractor. And then you may still be able to software bin the result and end up with an object the same size as it appears in the refractor image.

    Take an image at the same exposure in both, but unbinned in the refractor. The Meade has put about 22 times as much light onto the same number of effective pixels. The little planetary will be far, far brighter in the Meade image. What disguises this fact is that, unless you bin/software bin/resample the capture, the Meade will be producing a far larger and far fainter image, but try resampling the refractor's image up to the size of the Meade's and see what it looks like! It will look like noise...
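    To put rough numbers on that (the Meade's central obstruction and both focal lengths are assumed here, so treat these as illustrative rather than exact):

        import math

        meade_aperture_mm = 14 * 25.4                       # 355.6 mm
        meade_obstruction_mm = 0.37 * meade_aperture_mm     # assumed ~37% obstruction
        frac_aperture_mm = 2.7 * 25.4                       # 68.6 mm

        meade_area = math.pi * (meade_aperture_mm / 2) ** 2 \
                   - math.pi * (meade_obstruction_mm / 2) ** 2
        frac_area = math.pi * (frac_aperture_mm / 2) ** 2

        # Light gathered from the same object depends on aperture area alone.
        print(f"Meade vs refractor, light on the same object: ~{meade_area / frac_area:.0f}x")

        # Assumed focal lengths: 14" at F10 ~ 3556 mm, 2.7" at F5 ~ 343 mm.
        # The object is ~10x larger on the chip in the Meade, so binning or
        # resampling down to the refractor's scale concentrates all that extra
        # light onto the same number of effective pixels.
        print(f"Linear image-scale ratio: ~{(14 * 25.4 * 10) / (2.7 * 25.4 * 5):.1f}x")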

    Olly

  16. 5 minutes ago, kirkster501 said:

    Many thanks folks for your input.

    Yes Ole, aware of the focus criticality.  I am thinking of a way to autofocus the lens with a SW motor and a pulley belt of some sort. 

    TS do these devices which I found helpful.


    If you do want to stop down you are not obliged to use the diaphragm. A front aperture mask will reduce the aperture (stopping the lens down) without introducing the diffraction effects of the iris blades. Graphics outlets sell the compass-cutters.
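    The arithmetic for sizing the mask is simple (the lens figures below are made up for illustration): the working focal ratio is just the focal length divided by the clear opening you leave.

        focal_length_mm = 200       # hypothetical camera lens
        mask_opening_mm = 50        # hole cut with the compass-cutter
        print(f"Working focal ratio: f/{focal_length_mm / mask_opening_mm:.1f}")   # f/4.0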


    Focusing at the intersection of the 1/3 lines is brilliant advice.

    Olly

     

  17. I doubt it's temperature related but you could try a dummy sequence by day or by cloud without the cooler.

    I agree with sloz that USB is the likely culprit but I'd reinstall the drivers anyway. Edit: do you always use the same USB ports? Not to do so is asking for trouble.

    Olly

    Reading your post makes me suspect that you are, maybe, concentrating on the wrong numbers. Mounts do have to carry the payload you put on them, certainly, but they also have to deliver a tracking accuracy under guiding which is at least twice as good in arcseconds as your imaging scale in arcseconds per pixel. A C8 and 6D are working at 0.66" per pixel at native FL, or 1.07"PP with the 0.62 reducer. 0.66"PP would require you to achieve a guide RMS of about 0.3 arcseconds, which is just about possible on a premium mount and with good seeing.  1.07"PP is more reasonable and can even be supported by a good EQ6 (they vary a lot) or by a CEM from what I read. It is also worth noting that, even when the mount can deliver these accuracies in principle, the seeing may not allow you to capture real details at anything like this resolution. (I image at 0.9"PP on a Mesu 200 mount but it is common for the seeing to make imaging in luminance pointless. By using a mono CCD I can shoot colour when the seeing is unstable and wait for a steady night for the luminance.)
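    Those image scales come from the standard formula, scale ("/pixel) = 206.265 x pixel size (um) / focal length (mm). A quick sketch with assumed C8 and 6D figures (2032 mm native focal length, ~6.5 um pixels):

        def pixel_scale(pixel_um, focal_mm):
            """Image scale in arcseconds per pixel."""
            return 206.265 * pixel_um / focal_mm

        print(pixel_scale(6.55, 2032))          # ~0.66 "/pixel at native focal length
        print(pixel_scale(6.55, 2032 * 0.62))   # ~1.07 "/pixel with the 0.62x reducer
        # Rule of thumb from above: total guiding RMS wants to be about half of
        # this figure, i.e. roughly 0.3" RMS to support 0.66 "/pixel.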

    This brings us to the possibility of moving to an even longer FL SCT. Unless you could considerably increase your pixel size there would be no point in doing so, because the resolutions in question will never be supported by the atmosphere in anything resembling a normal location and without a seriously premium mount. If using a mono CCD camera you could usefully make effectively bigger pixels by binning 2x2 or even 3x3. A mono CMOS could do likewise but the gain in efficiency would not be so great.

    If you are planning ahead and starting from scratch I would first set out to match local seeing, a realistic estimate of mount tracking error under guiding, focal length and pixel size. In astrophotography these work together.  If any one of these numbers is significantly out of step with the others it will, if it is 'better' than the rest, be wasted and, if it is 'worse' than the rest, totally negate the advantages of all the others. This is a handy pixel scale calculator: http://www.12dstring.me.uk/fovcalc.php  It will give you the imaging scale of any system you choose to model.

    You mention Hyperstar. Beware! There is a lot they don't tell you, and a lot of what they do tell you is, in my view, deceptive at best and downright false at worst. One thing is certain: F2 will never, ever, be 'easy.'

    Olly
