
Posts posted by Filroden

  1. 2 hours ago, happy-kat said:

    Given rotation and everything moves around eventually I guess that's workable

    I'm just impatient. It currently puts Pacman, Iris, Veil, NAN, etc. out of reach until next summer unless I carry the kit to another site (and that means figuring out battery power). I've probably got a limited window on Soul, so I might try to frame it horizontally and see if I can't fix the issues I had last week.

    Update: just checked and it looks like both Pacman and Gamma Cass are within my altitude limit for about an hour.

  2. I have two projects for the daylight hours today:

    1) Squeeze an extra 5-10 degrees of altitude out of my mount. Moving to the new camera really extended my imaging train and it has reduced how high I can image. I think I can get more altitude if I turn the camera upside down, as that moves the bulk of the filter wheel away from the mount. I'm also going to see if I can move the scope further up, though its dove bar is very short. The DSLR was small enough that it could almost clear underneath the scope and allowed me to get to at least 75° altitude. Now I'm scared to go near 65°, which is really limiting my imaging options. My mount was definitely not designed with imaging in mind!

    2) Take an all-sky panorama to load into SkySafari so I can better estimate when targets are clearing the houses/street lights. I hadn't realised when I bought the house how little sky I can actually see. It initially looked like I could see half the sky from the patio, but the height of the houses plus the three street lights leaves me a small cone from roughly NE to SE and from 20° to 60° altitude (with the ideal imaging altitude being around 40° to clear the light pollution but still stay low enough to reduce rotation) :( I could move the scope to the end of the garden and recover a little of the western sky, but that relies on my first project being successful and also means I have to leave the warmth of the kitchen and sit in the garden (and trail an extension lead the length of the garden).

     

    Update: regardless of where I move the dove bar, the highest I can go without camera contact is 71 degrees. If I don't rotate the camera/filter wheel, that reduces to 66 degrees.

  3. 9 hours ago, Nigel G said:

    Last nights efforts.

    I'm not sure if I have something set wrong, but I'm having serious trouble with colours. After stacking and developing in ST my image was bright green, the same as a mask green; in fact, just for the hell of it I chose mask and could hardly see the mask green. Getting some sort of colour is a proper task. No matter how hard I try I can't get any white stars, just blue, green, red or purple.

    I thought I'd have a go at the Veil Nebula with my 210mm lens. After all the delays of last night I got around 1h 40m of 60 and 90 second subs, along with 68 darks, 30 flats and bias (had to create a whole new batch for my wife's new camera). As the battery on the camera was running low I couldn't resist a quick snap at M45; the lens had just started to collect a little dew, so I grabbed 9 x 60s subs and stacked them too with 9 darks, 9 flats and bias. Not bad for 9 minutes of exposure time.

    There's more in the first image than I was expecting; in the last 15 minutes light dew had started settling. I wonder what this would be like with a cooled mono CCD.

    veilWF-1.jpg

     

     

    Cheers

    Nige.

    Hi Nige

    This is a strange one. I can clearly see the green in the first image but the same effect is missing from the second. Did you change some of your processing steps between the two images?

    One thing that jumps out at me in the first image is that it is the cores of the stars which are green. Their halos all seem to carry the correct colours (reds, whites and yellows). It's as if the cores are over-exposed and some process you've done in StarTools is highlighting/masking the star cores and affecting just them?

    I had a play with the jpeg in both PixInsight and Photoshop. I couldn't do much with the star colours - I could reduce the green but I couldn't re-tone the stars. However, PixInsight has a great little tool that reduces star sizes, and when I ran that, the Veil lifted! A little brightening in Photoshop later and ...

    NigeVeil.jpg

    That's a lot of Veil you've captured there. Nicely done. I think it's quite a tough target even with long exposures.

    One more thing springs to mind (and it's been mentioned a few times in the thread), could the colour issue be something to do with debayering and therefore to do with that initial setting in StarTools? Alternatively, is it a setting in DSS?

    • Like 2
  4. 7 minutes ago, The Admiral said:

    Better still Ken, well done! A little brighter, which brings out the nebulous bits somewhat better. Given that you've scaled it to 300dpi on A3, does that mean that it'll end up on your wall?

    Ian

    I'm thinking about it though I prefer the wispiness of M45. I know a good printer in East London but I'd like to see if I can find one locally that can handle astro images (shades of black are much harder to print!). I'm going to put it and a couple of other images onto a memory stick and test them at A4 and A3 to see how they work. Alternatively, I might see about A5 and create something with a few images (and maybe a little text) too. Another project for a cloudy night!

  5. Okay, one final version and I will stop tweaking (at least for today). I took the original image into Photoshop, cropped it and then rescaled it to fit approx. A3 at 300dpi. I applied a little vibrance and a small S-curve adjustment to enhance the contrast. I'm much happier with this than the linear stretch I applied in Lightroom, which I think just clipped detail rather than enhancing contrast.

    Tiff attached.

    M33_20161002_v3.tif
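    In case it's useful, here's the quick arithmetic behind that A3-at-300dpi figure; nothing more than millimetres to inches to pixels (the paper sizes are the standard ISO ones and the results are approximate):

    ```python
    # Rough pixel dimensions needed to print at a given dpi.
    # Paper sizes are the standard ISO A-series dimensions in millimetres.
    PAPER_MM = {
        "A3": (297, 420),
        "A4": (210, 297),
        "A5": (148, 210),
    }

    def pixels_for_print(paper: str, dpi: int = 300) -> tuple[int, int]:
        """Return (width_px, height_px) needed to print `paper` at `dpi`."""
        w_mm, h_mm = PAPER_MM[paper]
        mm_per_inch = 25.4
        return round(w_mm / mm_per_inch * dpi), round(h_mm / mm_per_inch * dpi)

    for size in ("A3", "A4", "A5"):
        print(size, pixels_for_print(size))  # A3 -> roughly 3508 x 4961 px
    ```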

  6. 3 minutes ago, Nigel G said:

    Very nice Ken, that's a vast improvement in details and great colours, I'm impressed. 

    That's probably the best image I have seen from an Alt-Az mount.  I wonder how sharp a nebula will be.

    Well done. 

    Nige.

    I can't really image the California Nebula until around 23:00 and I can only image the Soul before about 22:00 (the Heart is too big even for a two panel mosaic). The Pacman is already too high. The real test will be in a month or so when I can get my scope onto the Rosette, Flame, HH and Orion nebulae.

    The weather here is nice at the moment, but the forecast is saying low cloud cover moving in around 20:00; tomorrow is still looking promising. I have the scope still set up from last night and sitting on the dining room table, so it won't take me long to set up if I get another dark night.

    • Like 2
  7. 14 minutes ago, Herzy said:

    Did you have gradients to deal with? I had trouble processing this target because I couldn't differentiate between noise and small gas structure. There are so many small gas clouds in the galaxy that look like noise. I would have gradients with heavy noise on one side and smooth on the other so I couldn't get anything out of it.

    I had a broad gradient running from the bottom of the image to the top, plus some vignetting because I didn't use flats, which AutomaticBackgroundExtractor took care of. I also had a fairly large dust bunny just to the south of the galaxy and about 25% of its size. I modelled this by carefully selecting many points in and just outside the dark area, avoiding anything in or near the galaxy, and DynamicBackgroundExtraction took care of it after two applications.

    Otherwise, I was fairly blunt with noise removal in the colour channels: I applied MultiscaleLinearTransform noise reduction to the first 5 wavelet layers (in decreasing strength) before stretching, and then, with a luminance mask, a little more MLT after stretching, with some TGVDenoise thrown in. I also (using galaxy and star masks) applied sharpening and an unsharp mask to the galaxy to enhance the finer details, and boosted the contrast with some HDRMultiscaleTransform.

    I still have a lot to learn in terms of processing - my masks are still very "wild" and could be created with much more refinement. I also still haven't wrapped my head around deconvolution (though at least now I can use it) or LocalHistogramEqualization (which I didn't use).

    P.S. I should have added: if you check the original version I posted (not the black-clipped version I did in Lightroom) and zoom right in (you may have to download the image), you can see there is some very fine noise in the galaxy, but nothing that detracts from the star-forming regions or areas of nebula (which are much easier to see zoomed in).
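    For anyone curious what that gradient-removal step boils down to, here's a rough sketch - not PixInsight's actual maths, just the general idea of fitting a smooth surface through hand-picked background samples and subtracting it (the sample positions and variable names are placeholders):

    ```python
    import numpy as np

    def fit_background(image: np.ndarray, samples: list[tuple[int, int]], degree: int = 2) -> np.ndarray:
        """Fit a low-order 2D polynomial through sampled background pixels.

        `samples` are (x, y) positions chosen on background sky, away from the
        galaxy, much like placing sample points in DynamicBackgroundExtraction.
        Returns the modelled background, to be subtracted from `image`.
        """
        xs = np.array([x for x, _ in samples], dtype=float)
        ys = np.array([y for _, y in samples], dtype=float)
        zs = np.array([image[y, x] for x, y in samples], dtype=float)

        # Build polynomial terms x^i * y^j with i + j <= degree.
        terms = [(i, j) for i in range(degree + 1) for j in range(degree + 1 - i)]
        A = np.column_stack([xs**i * ys**j for i, j in terms])
        coeffs, *_ = np.linalg.lstsq(A, zs, rcond=None)

        # Evaluate the fitted surface over the whole frame.
        yy, xx = np.mgrid[0:image.shape[0], 0:image.shape[1]]
        return sum(c * xx**i * yy**j for c, (i, j) in zip(coeffs, terms))

    # Usage sketch: subtract the model, then restore the median sky level.
    # background = fit_background(lum, sample_points)
    # flattened = lum - background + np.median(background)
    ```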

    • Like 1
  8. Well, here's my first result for M33 Pinwheel Galaxy. Captured using SGPro and processed in PixInsight.

    This comprises 81 x 30s L, with 34 x 30s R, 35 x 30s G and 33 x 30s B for a total imaging time of 91.5 minutes.

    large.M33_20161002_v1.jpg

    Here's an annotated version too, as there are lots of additional fuzzies in there!

    M33_20161002_v1_annotated.jpg

    P.S. Here's a slightly tweaked version having taken it into Lightroom.

    M33_20161002_v1.jpg

    • Like 5
  9. Likewise, M33 is definitely there. My L subs are noisier than I would like (a by-product of the LP, so I must either image later or select higher targets for the early evenings) but the RGB is looking nice. I had a very nasty dust bunny which took two iterations to remove (which probably means I've created other gradient issues in other parts of the image). I probably have another hour of processing time left this afternoon (pre-processing is pretty automated so I can leave it running in the background). I'm guessing this will be one of those images where I'm never happy with the "final" version and it becomes a feature of re-processing every cloudy night!

  10. Ok, I took flats this morning and they looked far more normal than the ones I took last week. I used the sky with a lot of paper over the scope. However, for some reason, SGPro couldn't calibrate the G filter and it took me ages to find the random combination of changes that would make it work!

    I've just calibrated and integrated my files and the flats seem to add dust bunnies, not remove them! So attempt two now in progress without flats to see what the difference is like.

    I did learn a few more things last night. My USB cable from laptop to camera is long enough to reach the scope from the kitchen, but as M33 rose higher and moved to the east I must have reached its limit at some stage, as my subs started to show double stars or trails. Thankfully I caught it quite quickly.

    Also, my mount does love to slip, so I do need to visually inspect each image before accepting it into the stack (I'm "cheating" and using a batch process for calibration/integration, so I no longer have PixInsight analyse each sub for quality). Combined, these two effects meant I lost about 10 minutes of subs (and I still accepted some subs which showed borderline trailing).

    I also probably started imaging M33 too early. The first 30 mins of subs show significantly more gradient than the final hour, when it was much higher. I guess my imaging sweet spot is between 40° and 60° altitude. That said, I will have to wait and see what PixInsight can do with the gradient. Given M33 occupies much less of the field than M45 or IC1848, it will be easier to identify and remove the gradient without destroying signal.

    At one point I tried to increase the exposure time to 60s, but I think this coincided with the USB cable getting tight, so I discounted the result, thinking I was seeing trailing when in fact it may just have been tension from the cable. Something for me to test next time, as it will be much easier to process 100 x 60s images than 200 x 30s images (and it also reduces the total file size from over 7 GB to 3.5 GB!).
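    The file-size saving is just arithmetic; a quick sketch assuming roughly 32 MB per raw sub (about right for a 16-bit frame from this camera, though the exact figure will vary):

    ```python
    # Back-of-the-envelope session data volume, assuming ~32 MB per raw sub
    # (roughly right for a 16-megapixel, 16-bit frame; adjust to taste).
    MB_PER_SUB = 32

    def session_size_gb(sub_count: int, mb_per_sub: float = MB_PER_SUB) -> float:
        return sub_count * mb_per_sub / 1024

    print(session_size_gb(200))  # ~6.3 GB for 200 x 30s subs
    print(session_size_gb(100))  # ~3.1 GB for 100 x 60s subs
    ```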

  11. 4 minutes ago, The Admiral said:

    For one who can't do late nights, very welcome!

    I couldn't image last night, but M33 is on my list to do. I am looking forward to seeing what you get in 2 hours. I'm also keen to see what you can conjure up on a red emitting nebula.

    By the way, if memory serves me correctly, weren't you in Somerset not long ago, or have I got that totally wrong?

    Ian

    If I can get another early start I will give the Soul Nebula a proper run as it's still low enough before 22:00. Tuesday is looking good on the forecast.

    Yes, I lived in Somerset until August. I moved North so my skies are very different. I have a much more restricted view, only seeing from NNW to SSE with street lights to the NE. However, I think I have less light pollution now and can see the Milky Way which was impossible where I was.

  12. Isn't it great to be imaging at 20:00? My mount played nice tonight and the wifi connected. I figured out how to speed up the frame/focus module in SGPro (it can capture images of less than 1 second, and binning 4x4 really helped the speed), so alignment was a breeze.

    I had tried to improve the spacing from the camera to the flattener, given I'd just thrown it together on Thursday. That caused the first big hiccup, with my first images showing all sorts of weird! I reverted to the original spacing, which I think is too long, but at least my stars are now round and point-like, rather than something Monet would have been proud of.

    So, given it's so early I decided to give M33 a try. It's small on my sensor so it gives me a lot of latitude for cropping later. However, the forecast shows clouds could come at any time :(

    • Like 3
  13. 18 minutes ago, SilverAstro said:

    Thanks I'll go check them out.

    Meanwhile I had been thinking about your earlier dilemma about a gradient in your otherwise very nice M45. I had also noticed it, and that it was "warmer", red-ish, compared to the predominant blue of the nebula, and that the nebula looked like it was being buried in it. So, as an experiment only (I was hindered by only having an 8-bit jpg, and my fingers and everything were xxed for Nige!), I decomposed it into R, G and B and severely attacked the R, with a medium Gaussian filter on the G and B channels, and yes, there is quite a bit of nebula down in the bottom left and right diagonals.

    Thank you for taking a look. I think you're right and there is still gradient in my image. I think I will give it another go tomorrow and really spend some time on gradient removal, particularly in the red channel. However, some of that blue could still be light pollution. I have three LED street lights around the garden and the town has LED in most streets so it will seep into all three channels.

  14. 9 minutes ago, The Admiral said:

    Thanks SA, but it was more about trying to avoid messing about with laptops and all the wires rather than mono vs colour. That's what I like about DSLRs: you have a single package that you attach to the scope and away you go.

    Ian

    I think someone will sort out a combination including a Raspberry Pi running capture software, some attached SSD storage and a small screen. Until then, you can attach the Raspberry Pi but you have to connect remotely from a computer - at which point you're doing what I'm doing but with one more piece of kit!

    I've just managed to connect my laptop to the mount over wifi, which means that once aligned I think I can control the mount and goto from the laptop, removing the need for the iPad. I'm going to trial Nebulosity as it runs on a Mac. If I like it, I could simplify my own capture process (though SGPro does make taking flats a piece of cake).

    One more benefit of the camera - its higher resolution made getting a sharp focus trivial. I've never seen such a clear diffraction pattern from the Bahtinov mask before.

  15. 1 hour ago, The Admiral said:

    How do you find the noise levels with the ASI? It certainly seems to have taken the community by storm. I wish though that they could produce a self-contained unit, like a DSLR, but with such a device as the ASI, and a processor loaded with firmware which will do the astrophotography essentials. Clearly, though, it'd need a cable for cooling.

    For me it was effectively a one-for-one replacement for the DSLR, as I was already tethering the camera to the laptop to allow me to sequence images (and with a very long USB cable so I could do it from the warmth of the house!). The filter wheel connects to and is powered by the camera, so the only additional lead I now have to manage is the power supply to the cooler. The filter wheel is fully ASCOM-compliant, so I just set up my sequence and let it run for 30 minutes, in which time it captures 30 L and 10 each of RGB. (I've not yet really tested the different focal points of the four filters, so I'm probably introducing some blur if they are not parfocal.) I can then re-centre the object (I find that helps over time - the mount can track but it eventually starts to go wayward) and repeat the 30-minute routine. Plenty of time to sit and watch TV!
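    For the record, the timing of that block is just arithmetic: 30 L plus 10 each of R, G and B at 30s comes to 30 minutes, ignoring download and filter-change overheads. A trivial sketch:

    ```python
    # Rough duration of one capture block: 30 L + 10 each of R, G, B at 30s subs.
    # Download and filter-wheel overheads are ignored for simplicity.
    SUB_LENGTH_S = 30
    PLAN = {"L": 30, "R": 10, "G": 10, "B": 10}

    total_subs = sum(PLAN.values())
    total_minutes = total_subs * SUB_LENGTH_S / 60
    print(f"{total_subs} subs per block, about {total_minutes:.0f} minutes")  # 60 subs, ~30 minutes
    ```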

    So far the noise levels have been brilliant. It was so much easier to process the images, even with such limited total integration. In fact, I could probably have skipped noise reduction entirely and still had a cleaner image than a much longer DSLR image with noise reduction!

    One of the biggest immediately noticeable differences is in the colour noise. The Bayer matrix on an OSC DSLR really does introduce a lot of colour noise, particularly for us, given the field is constantly rotating, so over time each pixel swaps between picking up R, G, G and B. I found my backgrounds looked like ants had crawled through the picture with paint on their feet! What noise I now see is an order of magnitude less noticeable.

    So for me, I've gained a couple of huge benefits from the swap:

    - I'm now imaging at the full resolution of the chip, with every pixel equally sensitive to all colours (including deeper into red), compared to the DSLR, where only 1/4 of the pixels would read red, and even then it was only 25% efficient at the Ha band.

    - the cooling removes much of the noise and its very low read noise means I can capture signal in much shorter subs.

    - I get more natural colours when I combine the RGB and it's much easier to balance.

    - I don't have to worry about mirror vibrations, so I've speeded up my capture rate (6-second pauses between images are a lot when the sub length is only 30s to begin with).

    There are some negatives:

    - because of rotation and the way I'm currently sequencing, my final filter subs (B) are rotated quite a bit from the initial subs (L). I can probably address this in two ways: a) ignore it, since 60 subs with an OSC go through the same degree of rotation so I would be cropping anyway, or b) rather than take 30 L then 10 each of RGB, take 10 sequences of 3 L and 1 each of RGB. I think a) is the right way for me to think about it, but I will probably do b) because it means I have some colour data if clouds come over.

    - it takes a little longer to process because I have to integrate four sets of subs, crop each of them (trivial), remove the gradient from each (though I could remove the gradient from RGB after combination), then work on L and RGB through noise reduction and stretching before they are finally combined into a single LRGB image for final processing. That said, I've become faster with each try, so other than the initial integration, the processing feels like it's taking similar times.

    - I need more storage - each channel takes 32 MB.

    Given we are limited in exposure, having a much more sensitive camera with lower read noise and lower noise overall really helps reduce some of the gap we suffer from not being able to do guided long-exposure subs on an EQ mount. Once I get to grips with SGPro and can align the scope quicker, I think I'll be able to capture much higher quality subs than with my DSLR. Though I think Nige has it right with modding the camera to make it more red-sensitive - even with 100% QE on red, you're still only using 1/4 of the pixels to capture it.
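    Putting very rough numbers on that last point (the mono QE below is an illustrative assumption, not a measured figure; only the DSLR fractions come from the discussion above):

    ```python
    # Very rough relative Ha throughput, mono sensor vs one-shot-colour DSLR.
    # The 25% DSLR efficiency at Ha and the 1-in-4 red pixel fraction come from
    # the discussion above; the mono QE is an illustrative assumption only.
    MONO_QE_HA = 0.5                # assumed mono quantum efficiency at Ha
    DSLR_QE_HA = 0.25               # stock DSLR efficiency at the Ha band
    DSLR_RED_PIXEL_FRACTION = 0.25  # only 1 in 4 Bayer pixels sees red

    mono = 1.0 * MONO_QE_HA
    dslr = DSLR_RED_PIXEL_FRACTION * DSLR_QE_HA
    print(f"Mono collects roughly {mono / dslr:.0f}x more Ha signal")  # ~8x under these assumptions
    ```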

    P.S. I also realised why my IC1848 had star trails - it was at a very high altitude so 30s was probably too long given it was more to the NE than E.

    • Like 4
  16. Looks like I might have a clear night on Sunday. Any thoughts on which of these might be good targets?

    The California Nebula would need to be a two-panel mosaic, so I think that might be one too many complications for me to attempt so soon with the new camera. Has anyone tried for the nebula/clusters in Auriga?

    NGC1931 and IC410.jpg

    California Nebula.jpg

    Flaming Star Nebula.jpg

    P.S. I've just found out that you can double click your images when creating your post and reduce their sizes!

    • Like 1
  17. 5 hours ago, The Admiral said:

    To be honest Ken, my first reaction was that if it had been my image, I'd have probably made the background darker. But then, on looking closer I reckoned that actually the top LH corner looked OK, but also came to the conclusion that the rest of it was lightened due to nebulosity. The prospect of gradients hadn't entered my head. Gradients? Nebulosity? Your call! Your resulting image does look good though, but it would be a pity if you removed some of its character.

    Ian

    PS. Having looked here, you are probably right though! http://www.messier-objects.com/messier-45-pleiades/

    PPS. But then, looking here I'm not so sure! http://dso-browser.com/picture/view/1033/deep_sky/pleiades/M/45/bright-nebula/by-reptux?0=&from=dso&dso_id=859

     

    I still think there is a slight bottom-to-top gradient. The darkest points at the bottom give readings of over 20 in R and 40 in B, whereas at the top they go to almost zero. However, I'm not convinced it's all gradient - some of it is nebulosity - so I don't want to clip it any further. I typically prefer not to clip shadows. Although it means noise is more visible (let's be honest, we clip because our images tend to be noisy!), the eye is great at picking pattern out of noise. Even though the signal might be too low to enhance effectively with our processing tools, the eye can still do that final stretch. So I'd rather see it and spend hours debating whether it is or isn't signal than not see it at all.
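    For what it's worth, those readings can be reduced to comparing median levels in strips at opposite edges of the frame; a small numpy sketch (the loader and file name are placeholders, and which edge counts as "top" depends on how the file is read in):

    ```python
    import numpy as np

    def edge_background(channel: np.ndarray, strip: int = 100) -> tuple[float, float]:
        """Median level in the first and last `strip` rows of a channel.

        A large difference between the two suggests a vertical gradient,
        e.g. light pollution brightening one edge of the frame.
        """
        first = float(np.median(channel[:strip, :]))
        last = float(np.median(channel[-strip:, :]))
        return first, last

    # Usage sketch with a placeholder loader standing in for the red channel:
    # red = load_channel("M45_red.fits")   # hypothetical helper
    # top, bottom = edge_background(red)
    # print(bottom - top)  # positive means the last rows are brighter than the first
    ```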

    As a for instance, if you were to make an equilateral triangle, using Alcyone and Merope as the base, with the point directly down, then I'm convinced there is a cross of dark nebula (close to the left of HD 23632 if you have a star atlas). Now I know there is dark nebula in the area, but am I just imagining detail into my noise? That's why I love astronomy... you can dream!

  18. I noticed my image of M45 had a gradient from the bottom (so probably light pollution). I didn't want to reprocess the image from the start (typically gradient removal is the first processing step) so I've done some crude post-processing of the image to try and minimise it. Not sure whether I'm trading gradient removal from the background for increased noise in the main image?

    Still, for 15 mins of lum and 5 mins each for RGB, I'm going to call this one done until I can collect more data.

    large.M45_20160929_v2.jpg

    • Like 2
  19. 6 minutes ago, Nigel G said:

    Sometimes you just need to walk away. I sympathised with Ken and his problems last night; tonight it's all going Pete Tong.

    First my camera, now the PC just decided to go to sleep when I started using my astro cam. By the way, TIFF, FIT or JPG is OK for DSS.

    I'm going to observe tonight. I don't often get the chance to just look, so I'm going to make the most I can of the clear skies.

    Huff

    Nige ??

    Even though I had hours of frustration last night, the final result made up for it.

    Here's hoping you get to see something spectacular! And aren't eyes the ultimate alt/az mounted scopes?

    • Like 2
  20. 14 minutes ago, The Admiral said:

    That's a super image of M45 Ken, I love the nebulosity, which looks even better on my main PC.

    Indeed, and thank you. I think it loses something for being converted into a lower-quality JPEG for the web, but the full TIFF image on a large screen is breathtaking.

    I think my PixInsight processing is not too dissimilar to the StarTools processes you describe. I still have to master masking, as it's vital to being able to pinpoint which elements of the image to affect. I'm also learning how to use its preview feature, which allows me to apply processes to smaller parts of the image (reducing processing time) so I can tweak the settings until I find a result I like. I could probably push the data harder, but I think I need to collect more before I really push the processing.

    I think what's impressed me most with this new imaging setup is that I'm finally happy with star colours. It seems to be much easier to balance the colours when you can work on each of them individually, and SGPro makes capturing the data very automated. Once I've resolved the issue with my wifi connection to the scope and I can re-centre the scope every 5-10 minutes, I should be set for some longer imaging sessions.

    Next on my "to learn" list is whether binning the colour data improves things. In theory, I trade resolution (not so important for the colour channels) for improved sensitivity so I can take shorter colour subs to achieve the same effect (and therefore allow more time to collect luminosity, improving image quality/sharpness).

    • Like 1