
Posts posted by msacco

  1. 4 hours ago, wimvb said:

    Many people use it as a guide camera. Either the standard, wide body version, or the new mini.

    Do you have a local skywatcher representative? Maybe they can help with a camera adapter.

    For a cpc1100 you should also consider an off axis guider. But beware of the hyperstar. From what I've heard and read, they can be a real challenge to set up. Any telescope system that is fast (lower F-number) will also be difficult to manage. Focus, squareness and collimation become much more critical if the system is faster.

    What is important is pixel scale: how large an area of the sky falls on each pixel. For that you multiply the pixel size in micrometres by 206 and divide by the focal length in mm. For your current setup:

    6.54x206/1000 = 1.35 arcseconds per pixel. If you decrease the "image size" by resampling, you will end up with a pixel scale of 2.7 arcsecs/pixel (combining 2x2 pixels into 1). You may start to lose detail in your image. If you want smaller files but the same level of detail, you need fewer pixels, i.e. a smaller sensor. E.g., my camera has 1900x1200 pixels, 5.86 micrometres in size. The sensor is only about 11.6x7 mm, and the file size is 4.6 Mbyte. But it also covers a much smaller area of the sky, while each pixel covers 1.17 arcsec at 1000 mm focal length. This means that the level of detail for your camera and mine is almost the same, but my smaller sensor covers less area and results in smaller files. Having a large sensor costs.
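As a quick sanity check of that arithmetic (the 206 is really 206.265, i.e. 206265 arcseconds per radian divided by 1000 to convert micrometres to millimetres), here is a small Python sketch:

```python
# Pixel scale: how much sky (in arcseconds) lands on each pixel.
# scale = pixel size (um) * 206.265 / focal length (mm)
def pixel_scale(pixel_size_um: float, focal_length_mm: float) -> float:
    return pixel_size_um * 206.265 / focal_length_mm

# The setup from the quote: 6.54 um pixels at 1000 mm focal length
scale = pixel_scale(6.54, 1000)        # ~1.35 "/px
# 2x2 binning acts like a pixel twice the size, doubling the scale
binned = pixel_scale(2 * 6.54, 1000)   # ~2.70 "/px
```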

    That's strange, because the script  works on the entire image. I can post the Pixinsight  process container tomorrow, so you can have a look at how I did it.

    There are only 2 telescope shops in total, and they're both extremely expensive, so they probably won't be able to help me. I'm still deciding between a guidescope and an OAG. With my current setup I don't think an OAG is even possible because of the extra weight, but with the CPC1100 and wedge I could probably manage it, so maybe it's worth investing in. On the one hand I want to get things that will also be relevant to the new gear, so I can take it to its full potential; on the other hand I'm still not 100% sure exactly what I'd need until I actually receive it. Kinda confusing.

    I believe the CPC1100 will be very challenging for me at the beginning, and I might get frustrated at first. But I know that I'll be able to learn and overcome it with time :)

    If you could post the process container, that would be really useful, because for me it didn't really go well for some reason, at least not on the whole image. ^^

    Thanks!

  2. 26 minutes ago, Danjc said:

    Ah fair enough.......I’m not sure mate I would just have a good old internet search. 

    Yeah tried that, but I can't really find anything related to "Sky-Watcher 9x50 Finder to C Adapter" on either aliexpress/ebay/amazon.

    Guess I'll keep trying, or maybe look into a diy adapter.

  3. 4 minutes ago, Danjc said:

     

    I’m guessing you are referring to the ASI120 Mini if so I find it fine and have managed 600s subs with that and the SW 9x50 finder. 

    I'm sure this is the adapter you require. It's a tight fit, so patience is key. 

    Thanks! Since the shipping from that site is rather expensive, do you know where I can get it on AliExpress/eBay maybe?

    Is this the correct item? https://www.aliexpress.com/item/32675339177.html?spm=a2g0o.productlist.0.0.153d484fd43Wle&algo_pvid=936beef2-28d0-495c-a204-bac8c7b6d42f&algo_expid=936beef2-28d0-495c-a204-bac8c7b6d42f-2&btsid=a05aacdc-3bb9-4682-87de-e81aeedebcd6&ws_ab_test=searchweb0_0,searchweb201602_7,searchweb201603_52

    Maybe I'll get something a bit more reliable, but is this the correct direction?

    Thanks :)

  4. 21 hours ago, wimvb said:

    You should definitely invest in a guide scope and camera. A finder guider with an ASI120 (mini) isn't that expensive, and should get you subs with more than a minute exposure time. Regardless which gear you have (other than very short fl camera lenses), guiding will allow you to gather more data / sub.

    Yours is a 20 Mpixel camera with a 36 x24 mm sensor. Many dslrs have aps-c size sensors, and dedicated astro cameras may even have smaller (mine is 2.3 Mpixels). Smaller images (file size) means faster stacking.

    What gear are you looking at? If it's another Newtonian, you will need a coma corrector. Especially with your large dslr sensor.

    CBR is really simple. The script was developed to remove read noise bands from Canon images. But it can clean up any image that has horizontal lines or bands. I use it at default settings with highlight protection activated. But it only works when the lines or bands are truly horizontal. If they are vertical you have to rotate the image (process>geometry>fastrotation) before you apply the script.

    For noise reduction I used multiscale linear transforms kappa-sigma noise thresholding, with its built in linear mask (amplification 750). Use it before stretching, first on a preview to test it. I adjusted the kappa value such that the diffraction spikes in the very brightest star of your image kept their sharpness.

    Only lights and bias. I used the cosmetic correction function in the batch preprocessor instead of darks (with automatic detection activated).

    I'm currently considering which camera for guiding I should get, the thing is I'm not really familiar with how good some equipment will be, and I don't really like buying something that I'll need to replace a few months later.

    So the question is, will the ASI120 be sufficient as a guide camera? Obviously, the more I invest the better the camera, which will result in better tracking, but I'm not really into going too crazy, as I have other things to consider as well.

    Also, is there any way to install the camera onto the skywatcher 9x50 finderscope?

    As for the image size, how much difference does it make? Would scaling down the images harm the quality much? Would you recommend doing that?

    I'll probably get a second-hand CPC1100 + wedge + Hyperstar in 2-4 months. It will be used both visually and for astrophotography, so a coma corrector isn't necessary there, and I'm not sure it's worth investing in one for only a few months. I might get a new AP camera later on; the colour, cooled ASI294 seems like a decent compromise between getting something capable and not spending a fortune.

    As for the CBR, I tried that, and it really did an amazing job! But I didn't manage to apply it to the whole image; I tried rotating, but was left with pretty much a cross that received the CBR, while the rest of the image remained the same.

    Is there anything else I'm missing here?

    Thanks a lot for the explanations! :)

  5. On 03/08/2019 at 09:56, wimvb said:

    Regarding the banding:

    They are a result of the camera read noise showing through, and indicate under exposure. If they also show in your calibrated darks, you could test stacking without darks altogether. Use cosmetic correction to remove excessive hot pixels. 

    The CBR script will only remove horizontal bands. Quick rotate your image 90 degrees, use the script, turn back. If the bands aren't exactly horizontal, the script won't work properly. 

     

    On 03/08/2019 at 21:59, wimvb said:

    I've also downloaded your data set and am playing around with your image. Impressions so far:

    • You have a full frame camera, and I'm not sure that the scope will illuminate the whole chip.
    • Do you use a coma corrector? Even that would need to illuminate the whole chip. Vignetting can only be corrected up to a certain level.
    • Calibration frames (darks, bias, flats) can never decrease the random noise in images. Only averaging during stacking can do that.
    • The integration process is so slow because you have such large images (20 Mp). You can try increasing the memory settings in PI, but in all honesty, I don't think that will improve things much.
    • Your unstretched master doesn't show any stars. This and the vertical banding are telltale signs of under exposure. The bands are the read pattern of the sensor.

    If you want to fully utilise the dynamic range of the camera, you need to increase the exposure time. At a dark site, even at ISO 1000, you should be able to use an exposure time of several minutes. A general rule for DSLR imaging is to have the peak of the histogram at 1/4 to 1/3 from the left edge of the display. But at a truly dark site, that may be impossible. You should at least have the brightest stars at full intensity. If you want to keep star colour even in the brightest stars, you can decrease the exposure somewhat once you've determined which exposure starts to blow out star cores.
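That 1/4-to-1/3 rule of thumb can be checked programmatically on a test sub. A hedged numpy sketch, assuming a frame already normalised to the 0..1 range (`histogram_peak_fraction` is a made-up helper and the frame below is synthetic):

```python
import numpy as np

def histogram_peak_fraction(image: np.ndarray, bins: int = 256) -> float:
    """Where the histogram peak sits, as a fraction of full scale (0 = left edge)."""
    counts, edges = np.histogram(image, bins=bins, range=(0.0, 1.0))
    peak = int(np.argmax(counts))
    # Return the centre of the modal bin
    return (edges[peak] + edges[peak + 1]) / 2

# Synthetic sky-limited sub: background sitting near 0.3 of full scale
rng = np.random.default_rng(42)
sub = np.clip(rng.normal(0.30, 0.02, size=(200, 200)), 0.0, 1.0)
frac = histogram_peak_fraction(sub)
# Aim for roughly 0.25-0.33; a peak much further left suggests under exposure
```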

    Calibration images should be gray scale. Bias, darks and flats are never deBayered, because they correct the light frames pixel by pixel. If these calibration images were deBayered, their pixel values would change.

    Thanks for the tips!

    I'll reply to a few of the notes here:

    I'm not using a coma corrector, thought about buying one, but I might be upgrading my gear in the following 2-3 months, so I'm not sure it's worth spending money on that.

    You're saying that my images are large; is that not common for AP? Is the average image size usually smaller?

    The issue with longer exposures for me is that my gear is really far from being suitable for AP. I have a Skywatcher 200P on an EQ5 GoTo, which isn't usually used for AP, and the DSLR adds more weight into the equation.

    I believe the longest exposure I can use without my image being ruined is around 30 seconds.

    Maybe with my new gear there will be some improvements, I also need to invest into some kind of guidescope/something else that will help me get better tracking, but until then, I probably won't be able to use more than 30 seconds :)

    Hopefully the new gear will give me the ability to get much better photos :]

    Thanks.

     

    On 04/08/2019 at 02:16, wimvb said:

    Here's my attempt. All 87 good lights were used, no darks. Processed in PixInsight.

    m31_87.thumb.jpg.68a0f2fb7ba740e339a3b062a46add4b.jpg

    Wow! These are some amazing results! Can you try explaining a bit more about the CanonBanding reduction (with rotations)? The image looks really nice! And also there is so much less noise.

    I'm so curious now :) Also, you said that you didn't use the darks, did you use the bias or you used only the lights?

  6. 2 minutes ago, Scooot said:

    It is quite slow. Calibrating the lights will take a while as will stacking them.

    There’s a batch process which I believe is quicker but I’ve never used it.

    I've mostly heard that it's not as good to use the batch process. I'll just try and see what I can do to improve it, thanks, and sorry for all the dumb questions :)

  7. 1 minute ago, Scooot said:

    Oh I see. No nothing else. Flats will help with the vignetting.

    Ok, thanks a lot!!!

    One more question (sorry 😶): I'm trying the workflow in my PixInsight, and it's so incredibly slow... I really don't know why.

    I don't have an amazing PC, but it should be at least decent: an i5 3470 OC'd to 3.6 GHz, 12 GB RAM, and a fast SSD. I mean, I don't expect it to fly, but each process such as integration, registration, etc. simply takes around 10-15 minutes.

    So I'm spending like 2-3 hours on the stacking process, and it simply feels like it doesn't go anywhere.

    Is it the same for you as well?

  8. 2 minutes ago, Scooot said:

    Thermal noise is usually generated by the continuous use of the camera, but it would be worse in hot weather. One of the disadvantages of DSLRs although some people modify them to cool them during use. Btw I’m not saying it is thermal noise, just that darks  wouldn’t remove it if it is.

    I understand. Are there any calibration frames that would handle that noise, then?

  9. 1 minute ago, Scooot said:

    Darks wouldn’t remove thermal noise. Was it hot?

    Are you referring to the weather or the camera?

    The weather was very good: a nice chill in a short-sleeved shirt, and comfortable in a long-sleeved one.

    As for the camera, I'm not really sure; I haven't felt it, but it didn't seem very hot, though I can't really tell.

  10. 13 minutes ago, Scooot said:

    Now that I know your camera: no, up to ISO 1600/3200 should be fine. Have a look here: http://dslr-astrophotography.com/iso-values-canon-cameras/

    If you had great weather maybe it is just noise showing up because I've stretched the image too much for the data. I don't know I'm afraid. Maybe someone else can help.

    Thanks for posting your data, it was good to have a go with some data from a dark sky area. :)  

    Well it is noise I believe, but shouldn't the bias and darks help with that? Maybe flats would've been very helpful here as well?

    Anyway, thanks a lot for all the help! And if processing this was fun for you, you should really try processing other data from people in my country; they're really amazing.

  11. 18 minutes ago, Scooot said:

    Oh yes I see what you mean, some sort of vertical streaking. I don't know how to remove that. It's not on the individual subs but there is a lot of differing noise pattern on each one so I guess this is the result of stacking it. The differing noise pattern, if it is that is why I wondered whether you were imaging through a thin layer of cloud. Or perhaps your ISO was too high? What camera are you using?

    This is a screenshot of my Dark Integration. I've used the screentransfer tool to view it otherwise it would look totally black.  

    37165016_Photo29-07-2019200950.thumb.jpg.143cb0ebaa3d2fcd08c856d0e838be2f.jpg

    Yes, that's what I was referring to. I don't think there were clouds; the weather was really amazing, and I was in one of the darkest spots in my country. No wind, no dew, no clouds (at least not in any weather app I checked, or visible to the eye).

    Anyway, I'm using a Canon EOS 6D. It's quite possible that the ISO was too high; I don't really know, as this is pretty much the first image I've taken (I also shot M8, but that was only about 15 minutes of total exposure from my backyard).

    Would you recommend trying a lower ISO next time?

  12. 15 minutes ago, Scooot said:

    Not sure what noise strips you can see. I can’t see any, not noise anyway. You’re not referring to the dust bands are you? I thought the stack was quite clean. Perhaps there’s some banding, which you can get with Canon Dslrs and there’s a script called canonbanding you can run to remove them but it didn’t seem to need it to me.

    Re the grey darks etc, are the individual cr2 images showing that grey as well when you use the screen transfer tool?

    Hmm the noise I was referring to is like so:

    image.thumb.png.ea9244da9c83d5cf82176e9a66b1a4e5.png

    Noise strip that goes through the whole height of the picture.

    I will try that :)

    As for the gray images, I guess it's because I changed to pure raw, and when doing that I first need to debayer, right?

    8 minutes ago, Scooot said:

    I think you’ve just changed that integration image. It looks ok to me, just stretched by the screen transfer a bit too much maybe.

    What do you mean by that? ^_^

  13. 1 hour ago, Gerry Casa Christiana said:

    I think the issue is more the length of your exposures. 20 seconds is not enough to get the detail you need. The background in the first picture is too bright. In photoshop try to get the black levels to around 30 that will help to see more detail in the galaxy. 

    Kind regards

    Gerry

    Well, I really don't expect much; what Scooot posted is pretty much what I expect, and I think it's decent for starting out.

    29 minutes ago, Scooot said:

    Hi msacco,

    Firstly I'm by no means an expert but I downloaded your good lights, Bias and darks and had a go at this. As you said your sky is very dark & mine isn't so I was curious. :)  Flats would have helped and I agree more data would help but it isn't too bad considering.

    Andromeda.thumb.png.a6c2b76aad303b462b4ce4461327e609.png

     

    This is the workflow, all processed in Pixinsight.

    Integrated Bias, Integrated Darks, Calibrated Lights,

    1177010848_Photo29-07-2019112949.thumb.jpg.b5efef7dceb8f11b3acd16f280c33274.jpg

    Next Cosmetic Correction, and Debayer

    Then I used subframe selector to choose & weight the best lights. The first few weren't so good so I ended up using 69 of your images, so a total Light integration time of about 14 minutes. When I used blink to view your lights I wondered whether you were imaging through a thin layer of high cloud particularly the early ones but as the night went on they improved.

    Anyway, after this , Star Aligned, then Drizzle Integrated the lights.

    On the stack, Background Neutralisation and Colour Calibration. (Couldn't get Photometric Colour Calibration to work for some reason.)

    There was a lot of vignetting because you had no flats so I used Dynamic Background Extraction to remove some of it. Using Divide instead of subtract. Here are the settings.

    1712749101_Photo29-07-2019172539.thumb.jpg.8858ba40694a862bf142c84339b6b65a.jpg

    I ran it twice, the above are the settings for the second run. I then cropped the corners a bit with Dynamic Crop.

    After this I applied some Noise Reduction with Multiscalelineartransform using a grey duplicate mask. Followed by a little SCNR as it had a green colour cast.

    Next was a masked stretch, followed by more noise reduction, darkening the background and contrast tweaks with curves.

    Hope I haven't missed anything too important and this helps. The Light Vortex Tutorials are a good aid.

    First of all thank you so much for taking time into this, I really appreciate it!

    Second, those are some really decent results for the number of frames I have, so that's already something. Do you think anything can be done about the noise strips? I think they kinda ruin the picture. Also, why does this happen? Is it common for DSLR cameras, or is this more noise than usual?

    One last question: for some reason my PixInsight has started producing only gray images. For example, if I integrate the bias/darks/etc, everything comes out gray, like so:

    image.png.cf8fe7d725d41009b52fc1dcf2c67c1e.png

    Do you know why that happens? I tried looking at the settings but didn't see anything related. I also don't think I changed anything.

    Thanks again for all the help!!!

  14. Hi, so I got some pictures of Andromeda yesterday, but I'm having some troubles getting some good results out of it.

    I wonder if someone could maybe help me a bit with that.

    My issue is mostly that I have too many background noise strips, which I simply can't get rid of.

    This is a picture for example:

    integration_ABE1_ABE_ABE.thumb.jpg.53267513e13e04f2f10cb833190be136.jpg

    Another example after automatic background extractor in PI:

    image.png.4b4cabcfad849c7cc6c4d3506c847d93.png

    As you can see, there are A LOT of noise strips. I'm using PixInsight, and I also tried Photoshop and GIMP, but I can't get much improvement.

    I have uploaded all the frames here: https://drive.google.com/drive/u/2/folders/10GIeiJ-CYRV9sOaxlDfUc294t4LrIejI which includes:

    Bias, Dark, and lights.

    The "Lights good" folder contains all the frames that are good in my opinion, and the "Lights" folder is simply all the other light frames, which I think are a bit worse (that doesn't mean they're that bad, and maybe it would actually have been better to use them as well).

    Anyway, if someone could give me some assistance, that would be really really useful, as I believe I could get some better results out of it.

    General information: the pictures were taken from a REALLY dark place, around Bortle 1-2 I'd say, though Andromeda is fairly low on the horizon at the moment, so that could make a bit of a difference. Exposure time: 20 seconds, ISO: 1000.

    Thanks a lot!

  15. 1 hour ago, carastro said:

    I have finally found a stacked and calibrated image of the Horsehead I took years ago with a DSLR.  (I might even have the stacked RAW data but that would be a real Pain to upload).

    Would you like me to post up the stacked and calibrated Horsehead for processing practice?

    Carole 

    That would be great! The more practice the better :)

  16. I was just shooting my first DSO image yesterday, the Lagoon Nebula as well :)

    This is the result I got with the help of people on the forum here:

    image.png.1fd29e2ad765a76d4e3a72109f6b6d7d2.jpg.c99b5f4aa7305606dd2566f4b8eb0bfe.jpg

    That's really not amazing, but you can see some colour and detail.

    I'd advise you to post the RAW pictures here so people can help you extract more detail from the image; that's how I got this result.

    I just had a thread which rolled into something similar to this, so you can take a look here: 

     The second page is mostly relevant, but maybe some other tips there will be useful.

    Good luck!

  17. 1 minute ago, vlaiv said:

    It's got to do with people programming different programs. Two different conventions on coordinate system orientation.

    In "normal" math we are used to X axis being to the right and Y axis being to the up (positive values increasing). Screen pixels work a bit different - top row of pixels on screen has Y coordinate 0 and it increases "downwards" (next is row 1, then row 2 and so on) - so there is "flip" of Y coordinate.

    If one simply loads file (that should be row 0 then row 1 then row 2, etc ...) and displays directly on the screen you get one vertical orientation. If one loads file into "math" coordinate system - it will be reversed in Y direction.

    How software operates depend on people who programmed it - some use math coordinate space and other just "dump" rows onto screen - hence Y flip between programs - but it is not a big deal as Vertical Flip operation is always present and it is "non destructive" - it does not change any pixel values it just reorders them (same goes for horizontal flip and 90 degree rotations).
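The point that a vertical flip only reorders pixels without changing any values is easy to demonstrate. A minimal numpy sketch:

```python
import numpy as np

# A tiny "image": row 0 first, as stored in the file
img = np.arange(12).reshape(3, 4)

# Screen convention puts row 0 at the top; "math" convention puts it at
# the bottom. Converting between the two is just a vertical flip:
flipped = img[::-1, :]  # same as np.flipud(img)

# Non-destructive: same pixel values, only the row order changes,
# and flipping twice restores the original.
assert np.array_equal(flipped[::-1, :], img)
assert set(img.ravel()) == set(flipped.ravel())
```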

    Yeah of course, I was just curious about it 😁

    Thanks for the detailed explanation once again. :)

  18. Just now, carastro said:

    Well I think that is pretty good for a first try, much better than my first attempt.

    Rome wasn't built in a day as they say, (do they have that expression in Israel?).  In other words, you are trying to do a lot of new things for the first time.  You can't expect to get everything right first time.  Hopefully it will be a few steps forward and some back each time and you'll gradually progress.

    Well done.

    Carole 

    Yeah we do, that's a pretty global phrase isn't it? :)

    I'm actually not expecting much at all. I mean, I do hope and try to get the most out of what I do, but I'm really happy with my current first attempt; it's pretty bad on the AP scale I guess, but it's still something.

    1 minute ago, vlaiv said:

    image.png.010ac74822f189c6b990944aef8a89ca.png

    Here is another attempt - processing in Gimp. I did "custom" vignetting removal - kind of works. Here is what I've done:

    Open stacked image in Gimp 2.10 (use 32 bit per channel format). Copy layer. Do heavy median blur on top layer (blur radius of about 90px or something like that) - leave next two parameters at 50% and use high precision.

    Set the layer mode to division, and put the layer opacity at something low like 10%. Merge the two layers. Now just do levels/curves and you should get a fairly "flat" central region.
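For anyone curious, a rough numpy/scipy analogue of those Gimp steps might look like this (`flatten_background` is a hypothetical helper, and the low-opacity layer merge is approximated by a simple linear blend):

```python
import numpy as np
from scipy.ndimage import median_filter

def flatten_background(img: np.ndarray, radius: int = 90,
                       opacity: float = 0.10) -> np.ndarray:
    """Divide by a heavily median-blurred copy, blended in at low opacity."""
    # Heavy median blur estimates the large-scale background (vignetting)
    background = median_filter(img, size=2 * radius + 1)
    # "Division" layer mode: flatten the image against its background
    divided = img / np.maximum(background, 1e-6)
    # Merging a low-opacity division layer ~ a linear blend of the two
    return (1.0 - opacity) * img + opacity * divided
```

With `opacity=1.0` this degenerates to plain flat-division; at low opacity it only tames the large-scale gradient, much like the 10% layer in the recipe, after which levels/curves finish the job.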

    That's really awesome as well, thanks for all the explanations, really appreciate that!

    One question I don't understand: I often see the image flipped like that; when does that happen, and why? ^_^

    Thanks again!! I'll keep trying :)

  19. Just now, vlaiv said:

    It did look rather whitish :D

    image.thumb.png.8adca6a9f32cc484f4dcb5224ad45910.png

    I used super pixel mode as debayering method:

    image.png.824831f8a1e2f59b3811e5538337c0af.png

    And these were stacking parameters:

    image.png.d8ad4135046503304752df4c6b52461e.png

    After that I loaded image in StarTools (trial - I don't use it to process images otherwise but it can attempt to remove vignetting and that is why I used it), did basic develop, remove vignetting and color cast and did color balance.

    Took a screen shot of the result at 50% zoom (trial version won't let you save image for further processing).

    That's so awesome, thank you so much!

    I actually used your image and got the following results:

    image.png.1fd29e2ad765a76d4e3a72109f6b6d7d2.jpg.36b8f74aacfddb7327bfb6aecf2c83e7.jpg

    Obviously, by pulling out more nebulosity colour I increased the noise tremendously, and that's reflected in the stars and elsewhere, but I find it kinda cool; we can see some more detail now :)

    In close saturday I'll be going to a really dark site, so hopefully I'll be able to take much much better shots!

  20. 4 minutes ago, vlaiv said:

    I don't like working with uncalibrated data, but here is quick attempt:

    image.thumb.png.30363118039286bc017a1bbf8f106173.png

    Lack of flats really shows: there is enormous vignetting that is hard to remove. This is a quick process in StarTools after stacking in DSS. I tried to wipe the background and remove the vignetting, but as you can see, it did not work very well; still, I kind of like the effect.

    There is some nebulosity showing - there is something to look at :D

     

    Wow! This is so much better than what I managed to get!

    How did the after-stacking image look to you? For me it was almost plain white.

    Can you briefly state what settings you used, so I can try to get the same result? :x

  21. 1 hour ago, Starwiz said:

    There shouldn't be any difference. If you focus on something on the horizon, it's effectively at infinity as far as the camera is concerned. So it will just be a matter of getting the exposure right and tweaking the focus.

    John

    Yep, that seems to work well! :)

    So I managed to get some photos of the Lagoon Nebula. I didn't take many, as the main purpose was testing. I live in a fairly dark town with a relatively low amount of light pollution, yet the images still look like they really suffer from light pollution.

    I did 20-second subs at ISO 800. I took around 40 frames, of which only 29 were good enough.

    I know 29 frames is barely anything to get a good image. I also didn't take bias, darks, or flats, simply because I first wanted to see that I could get anything at all.

    I tried processing the image, but it didn't work that well. I'm aware that I probably won't get much with only 29 light frames, but I wonder if some of the more experienced guys here would still be able to get some nice results out of it.

    After stacking the image in Deep Sky Stacker, it came out almost completely white. I tried playing with the settings, but that didn't change much; I wonder if I'm doing something wrong there.

    If anyone feels like it, here are the 29 RAW photos:

    https://drive.google.com/open?id=1aequL0cJQLllTAeQhJkt3QFUKXiWbW-F

    Maybe someone can get something decent out of it? (By my level of "decent", I mean a bad result for others, but something that's still cool to see.)

    Thanks for all the help! :)
