Everything posted by Stub Mandrel

  1. EQ3 needs careful setting up to give its best and a more rigid (or modified) tripod, but if you get up to 60 second unguided subs you can add guiding and easily achieve 5 minutes.
  2. If you scrunch up your eyes enough, the shade difference between the two squares disappears.
  3. There are some very effective actions for removing horizontal banding on Canon images.
  4. Get as much data as you can; you should be able to do 30-60 second exposures with a Star Adventurer. Look at @Crackabarrel's image of Cygnus: you can see NGC 7000/the North America Nebula is the brightest area of nebulosity in a big, relatively bright region.
  5. It's either a hamburger or a humpback whale. Excuse... it is a new phone with the camera(s) in a different place...
  6. One of the best nebula targets for a DSLR and medium tele lens.
  7. Rather like CNC, I am a trifle concerned about making myself a redundant part of the process. I use SharpCap, which isn't really a sequencing program at all, and I would insist on SharpCap's level of camera and accessory control and ease of use. The main thing I would want from a sequencer, aside from the obvious (take X subs of Y seconds using Z filter of this target, then... with autofocus and plate solving), is recovering from guiding failures. The greatest weakness of PHD2 is that if the star is lost it typically sends the mount off on a little adventure, rather than staying put waiting for the star to reappear. Quite why it does this, I cannot fathom. My solution is to stop guiding, recentre using CDC and wait for the guide star to reappear, then restart guiding. So my 'ideal' sequencer would include guiding, detect the mount moving more than a tiny amount off target, recentre and stay put until the guide star was recovered, then start a new frame. It would also have some very easy ways to input mount/sky-area limits; for example, at home a meridian flip is pointless except right up near the zenith. Also useful would be a 'focus alert' for manually focused scopes, telling me focus needs checking without stopping everything. I have noticed that SharpCap is really very stable in its camera connections, but PHD2 can lose the guide camera and need a restart, or the guide camera unplugging, before it will recognise it again (despite SharpCap being able to find it). I suppose what I hope for is that SharpCap will add a basic sequencer and perhaps guiding as well...
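The recovery behaviour wished for above can be sketched in a few lines. This is a hedged illustration only: every class and method name here (the guider, mount and camera interfaces) is a hypothetical stand-in, not PHD2's or any real sequencer's API.

```python
import time

def recovery_sequencer(guider, mount, camera, target, timeout_s=300, poll_s=1):
    """On star loss: abort the sub, stop guiding, recentre on the target
    and wait in place for the guide star - rather than letting the mount
    wander - then resume guiding and start a fresh frame."""
    while camera.frames_remaining() > 0:
        camera.start_exposure()
        while camera.exposing():
            if guider.star_lost():
                camera.abort_exposure()   # discard the spoiled sub
                guider.stop()             # don't chase a missing star
                mount.slew_to(target)     # recentre (e.g. via plate solve)
                deadline = time.time() + timeout_s
                while not guider.star_found():
                    if time.time() > deadline:
                        return False      # give up - probably clouded out
                    time.sleep(poll_s)    # stay put and wait
                guider.start()
                break                     # begin a new frame
            time.sleep(poll_s)
    return True
```

The key design choice is the innermost loop: on star loss the mount is recentred once and then simply waits, instead of being driven around by corrections to a star that isn't there.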
  8. Last week - after sunset, but before civil dark - does that count as night? Dark enough to do a three-star alignment with Vega, Altair and Deneb 🙂
  9. Well done, good start! Get it up to an hour's data - and take advantage of cooler evenings!
  10. I agree the narrowband images are too green, especially in the background, and the Iris image has a magenta cast. Both look better on my laptop, but don't stand viewing on my PC. I'm familiar with SCNR (it's available as a PS plugin called HLVG), but I find that it takes the life out of 'foreground' greens if I use it too early. I'm also tempted by HSO rather than SHO, so images are mostly red and blue, with gold where there is sulphur.
  11. Lovely star colour. There's a narrowband version on the cover of Charles Bracken's Astrophotography Sky Atlas. After careful study of yours and his images, I'm sure his is mirrored left/right!
  12. I suspect much of the complexity of PI is in the interface rather than the actual processes; the choices are vast but the number that are actually effective is pretty small. PS can do hideously complex things, there are actions (not written by me) that use many dozens of steps to achieve great results. Both programs have a fundamental weakness in offering you curves to manipulate that are tiny, forcing you to resort to entering or adjusting numbers. The best thing about PhotoPaint is that its graphs are approximately twice the size.
  13. Here's one that did seem to work well in Pixinsight, but it clearly wasn't a suitable subject for background neutralisation or gradient removal... Instead, I just tweaked the colour balance in PS and ran a round of star reduction, then denoised in Astra Image.
  14. Today I revisited in Photoshop some of my initial PixInsight attempts from while I was away. A possible disadvantage for PI was that I processed on my laptop, whose screen, though good, is rather different and fussy about viewing angle. I found some very consistent differences. PI seemed to really flog the data hard, pulling out faint nebulosity but at the expense of poor control over stars, which bloat, and over noise. I suspect this is why star masks are so critical in PI (I haven't mastered them yet), and it may also be why many suggest noise reduction on the linear data before combining it in PI. PS gave much more natural, less noisy results, but with less faint detail. I felt the difference was subtler than simply PI stretching more aggressively than PS/DSS; it has to do with curve shape, with PI (and SharpCap's preview) using a simple curve while DSS and PS encourage a non-linear S-shaped stretch. PS has more sophisticated control over colour balance, although PI brings out faint and subtle colour better, especially when Oiii or Sii signals are weak. In all three images I ended up using a blend of the PI and PS versions, with PI mostly contributing colour and lifting faint nebulosity, while PS mostly contributed luminosity for much tighter stars and better definition in brighter nebulosity. Noise reduction in PS is easier to use, but the jury is out on which is more effective. One area where I found a huge difference: GradientXterminator is not just easier to use than DBE, it gave consistently much smoother and better results. It also did a more accurate job of balancing background colour; I actually had to use it on some of the PI images, whose backgrounds were severely mottled no matter how I applied DBE, before they could provide an acceptable colour layer. It's clear I have a lot to learn about getting the best out of PI, especially about controlling stretches and noise reduction.
It seems using the screen transfer function and transferring it to the histogram stretch is a crude way to proceed, but good for finding what is in the image. I also need to understand star masking and reduction. DBE seems hobbled by its default approach; using bigger control areas seems to work better, but even then placement is critical, and small variations can introduce artefact gradients that spoil the background. I don't think it's fair to do side-by-side comparisons yet, but here are the first three 'best of each' images I've done.
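The two stretch shapes contrasted above can be made concrete with a small sketch: a single-parameter midtones transfer function (the simple curve behind PixInsight-style STF stretches) versus a smoothstep S-curve (the DSS/PS-style shape). The numbers are illustrative, not taken from any real image.

```python
def mtf(x, m=0.25):
    """Midtones transfer function: maps the midtones balance m to 0.5
    and lifts values near black hard - good for digging out faint
    nebulosity, but it drags noise and bloated stars up with it."""
    if x in (0.0, 1.0):
        return x
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

def s_curve(x):
    """Smoothstep S-curve: compresses shadows and highlights and adds
    midtone contrast, protecting the background at the cost of the
    faintest detail."""
    return x * x * (3.0 - 2.0 * x)

faint = 0.02                     # a faint-nebulosity pixel, normalised 0..1
lifted = mtf(faint, m=0.1)       # simple curve lifts it to ~0.155
protected = s_curve(faint)       # S-curve pushes it down to ~0.001
```

The asymmetry is the point: for the same faint pixel, the simple curve multiplies it several times over while the S-curve suppresses it, which matches PI pulling out faint nebulosity (plus noise) where PS/DSS keep a cleaner but shallower background.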
  15. Both my brother's house and mine are given a Bortle 5 by CO. I haven't seen his sky under astro-dark without the moon, but before astro-dark and without the moon it was almost as good as the best I have ever seen it here: just hints of the Milky Way. Most of his local street lights go off at 12, and they are incredibly well-directed LED lights. The light in front of his house was shining between the two houses straight onto my scope and I didn't even notice it for a few nights; the bit I could see lit up was not as bright as Jupiter, and the level of lighting on the street doesn't seem much brighter than full moon, much better than here.
  16. I had completely forgotten that! I think the thread is 0.7mm pitch, as it appears to be the same as the T-thread. If 0.7 doesn't fit, try making it 0.2mm undersize on diameter, or cropping the crest of the thread by that amount.
  17. Thanks, but I really struggle with video tutorials. I find step-by-steps like the Light Vortex ones better; I use them like a reference book as I have a non-linear brain. Back home at last, time to process the data with PS, compare the results and identify weak and strong points.
  18. Last night's data, thrashed... I will return to these when I get home; I'll process in PS, compare, and see what's down to inexperience with PI and what's a deficiency in the data. You can see some of the colour is badly posterised.
  19. Well, the most useful thing about those tutorials is they tell you WHY, not just WHAT, which makes the lessons learned transferable. I'm still to get into masks, but this Iris from a couple of nights ago doesn't seem too bad!
  20. Thanks, the Light Vortex tutorials look like what I've been searching for.
  21. Some good points! I didn't explore other calibration frames; I use bias for DSLR and dark flats for CMOS images. The difference between my RGB and narrowband filters is really noticeable. These images both use the same G-channel data: the first uses a flat made with the Ha filter, the second with the R filter. Incidentally my L filter, a Baader, is pretty close in focus to the narrowband filters rather than the RGB ones. I've noticed this problem several times before but never realised the cause. It may actually be that the amount of vignetting differs between filters, rather than a focus issue. ZWO Green image stacked with Baader Ha flat: ZWO green image stacked with ZWO Red flat:
  22. In my experience, few things cause as much confusion and odd problems as flats, the extra set of frames you take to compensate for defects in your imaging train such as dust, vignetting or optical oddities. Flats are images that capture these 'defects' without any other content, and when applied to stacked images they can make the ugliest of shadows or dust doughnuts disappear like magic. This post was inspired by a presentation by @Whistlin Bob that covered the subject really well and made me realise why I was getting a particular defect in some of my images! Flats are simply a set of images made using a plain background and exposed so that no part of the image is under- or over-exposed (so the histogram should be near the middle). With a DSLR the 'Av' mode will do the job reliably every time; with a CCD or CMOS camera you need to pay attention to the histogram display. The target can be any smoothly illuminated surface. I have used the following, all with success: a plain painted wall, illuminated by diffuse light*; a white t-shirt over the end of the scope pointed at the sky (away from the sun); an LED tracing panel*; the sky (as long as it hasn't got any clouds, as these will focus and affect the background). For the marked sources* I found it best to rotate the scope (or source) to avoid any gradient affecting the flat. With a DSLR I take 16 to 24 flats. With a dedicated astro camera I take 64, because it is easy to do so. You can combine flats: DSS does this automatically the first time you use a set, then you can delete the originals and just use the master flat. SharpCap has a special routine which takes flats for you. It always ticks 'apply automatically' - I always make sure this is unchecked, as otherwise you end up with (wrong) flats being unexpectedly applied to future sessions. It also allows you to apply a different flat (if some dust appears during a session or something moves). Which leads us to 'when should I make flats?'
Some people make them for every session. This is the 'gold standard' and is what I do when working with SharpCap, as it only takes a few minutes to get a good 'sky flat' while you are waiting for darkness. Otherwise, it's possible to make flats before or after your session, the key thing being that you make no (or as little as possible) change to the setup. If, like me, you leave camera and scope set up and are meticulous about cleaning dust from sensors and filters, then a set of flats could last you several sessions, even a month! But sometimes it becomes apparent that dust has got in and you need a new set of flats or master flat. I've learned that it is easiest in the long run to get that master flat at the start of each session (or at the end if you use something like a panel). Flats taken the following day are fine - but only if nothing changes; the problem is that moving scopes around can dislodge dust. Optical changes: many people create a flat for each filter they use, and the presentation made me realise that filters really do have an effect. I have a filter wheel and usually make my flats using the L filter, which is parfocal with my (Baader) narrowband filters. For a good while I have seen odd, circular artefacts on my (ZWO) RGB filtered images, and it's finally dawned on me why. For my last few sessions I created a master flat using the Ha filter, which worked perfectly on all my narrowband images, but the RGB ones showed a dark, circular artefact near the top left. As everything was still set up, I created a new flat using the red filter and an evenly illuminated blanket; this worked perfectly. The lesson is that flats can work for multiple filters, but only if they are parfocal, or nearly so. I hope this is useful for those beginning with imaging and helps you get your head around how and when to use flats faster than I did!
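The correction a master flat performs can be sketched in a few lines. This is a minimal illustration with plain Python lists standing in for rows of pixels (real stacking software also subtracts darks/bias first, and works on the whole frame); the numbers are invented for the example.

```python
def apply_flat(light, master_flat):
    """Divide each pixel by the flat normalised to its mean, so the
    vignetting and dust shadows recorded in the flat are undone."""
    mean = sum(master_flat) / len(master_flat)
    return [p / (f / mean) for p, f in zip(light, master_flat)]

# A row of pixels with 20% vignetting towards the edges...
falloff = [0.8, 0.9, 1.0, 0.9, 0.8]
light = [100.0 * v for v in falloff]    # sky signal dimmed by the optics
flat = [10000.0 * v for v in falloff]   # the flat records the same falloff

even = apply_flat(light, flat)          # field restored to a uniform level

# A flat taken through a different, non-parfocal filter records a
# different pattern, so dividing by it leaves a residual artefact -
# like the dark circle described above. Illustrative values only:
wrong_flat = [10000.0, 10000.0, 9000.0, 10000.0, 10000.0]
residual = apply_flat(light, wrong_flat)
```

Because it is a division, the flat only cancels defects it actually shares with the lights, which is why a mismatched filter (or freshly arrived dust) produces an artefact instead of removing one.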
  23. I generated a lot of data last night, mostly RGB, and my flats were with the Ha filter... so I need to take a new set of flats before finishing those images. The one narrowband image was the Crescent. I stacked in DSS, then did a very basic STF stretch and combine in PixInsight, neutralised the background and then tried to use a mask for a final curves adjustment. Boy, it's a foul user experience! Here's my first result, which looks OK but feels like what PixInsight wanted to give me; I have far less control than usual. I'm sure there are ways and means. Can anyone point me at good step-by-step tutorials (not video ones, as even the simplest leave so much out or unexplained)? I'm afraid I went to Astra Image for NR... and I think it needs more curves/star reduction etc.
  24. Your examples are all pretty much like the Baader example, which suggests you need to increase the spacing. Is it the Baader MPCC? That one is much fussier about spacing than the Skywatcher one. It's not clear, but there appears to be some tilt, or the collimation may be out as well. Try adding a 1mm spacer (card will do for a trial), recollimating and seeing if it improves. Once you get the spacing right you can get a permanent spacer (or look on Amazon; they have some cheap spacer sets).
  25. This is only my second mosaic, so it's been a bit of a struggle, especially as poor fits left the western part with a quite noticeable gradient. To my surprise the Sii signal was almost as strong as the Ha, so I used selective colour to make the non-Oiii parts a bit redder; they wanted to be yellowy brown. The TIFF version of this is 137MB!!