
Posts posted by ONIKKINEN

  1. On 01/09/2021 at 09:28, tomato said:

    I would do everything in APP. Calibrate and stack each panel first, then combine the stacked panels to create the mosaic. You need to experiment with the Multi Band Blending and Local Normalisation Correction settings to achieve the best seamless result.

    Take a look at this excellent guide by Sara Wager:

     https://www.astropixelprocessor.com/how-to-create-a-mosaic-in-easy-steps-by-sara-wager/

    I don’t know the spec of your processing PC, but the processes can take a while  on a lower spec machine.

    Thanks for the link, mostly helpful. But the example in the tutorial has 29 subs and doesn't do star analysis or registration, which are the phases that took my 250 subs around 4 hours. I just found it weird that APP hangs on to the process for so long; I had to leave the PC running as I went to sleep because it had already taken at least 6 hours. Wondering if I did something wrong? I mostly just followed the recommendations in APP. My PC is no slouch either, an overclocked 6700K with decent DDR4 RAM, if that means anything to you, so it's weird that it took so long.

    Anyway, as a proof of concept I combined 5 panels' worth of data into one and got it working fairly well. There are seams, but they are far less obvious than I expected. Also, the middle part is missing the blue halo of young hot stars, as it has the least data. I think I've got the hang of it now; I just need to set aside a full processing day whenever I return to a mosaic project.

    M31-mosaicJ2.thumb.jpg.b7001cee80f52df8a63eeda6db84160a.jpg

  2. On 02/09/2021 at 23:04, nfotis said:

    It seems that the sensor is quite good. This camera looks like a plain vanilla version compared to others (no tilt plate etc).

    If you are getting these results without LP or NB filters, I wonder how well this camera will work using filters.

    N.F.

     

     

    The sensor behaves as expected from the IMX571 so: very good.

    I don't think using light pollution filters on galaxies is a good idea, as galaxies are broadband targets that are bright in the very wavelengths those filters block. Quite honestly I have had no trouble with light pollution; it's almost like it's not there. Just colour balance and it's gone: the 16-bit ADC and high colour sensitivity will retain the data through some pretty nasty LP.

  3. You have nice detail in there, all the spiral arms are present.

    Looks like maybe you have clipped the whites in processing, as the core is very bright; colour balance might also have taken a hit in the process.

    Light pollution filters are generally not very helpful with galaxies, as galaxies are brightest around the same wavelengths as light pollution; they also make colour balance more difficult due to the missing parts of the spectrum.

    I found processing only in Photoshop quite difficult at first; actually I still do if I only use PS. It's difficult to see whether you've clipped the data, and you generally can't see what you're working with until you stretch the stack. If you want to try something else, I can recommend SIRIL, a free astrophotography processing program. You can stack in DSS, colour balance and stretch in SIRIL, and then do final touches in Photoshop. SIRIL is easy to use (for astro processing software) and makes clipping whites or blacks entirely optional. It also has a photometric colour calibration tool, pulling the correct colour from the stars recognized in the picture itself, rather than something you have to balance by hand. It might not work with light pollution filters though, as they cut off significant portions of the spectrum.

  4. 38 minutes ago, Skyline said:

    How long have you been using that RisingCam IMX571 and what do you think about it compared to say the likes of zwo?

    Just dipping my toes into the dedicated astro cam world, this being my first one, so I can't compare it to first-hand experience with others; a DSLR is so far from a fair comparison that it doesn't even make sense.

    The camera performs extremely well and is a joy to work with. No obvious hiccups with N.I.N.A. come to mind. There is no amp glow or pattern noise of any kind, and the cooler works well and fairly accurately. The cooler overshoots its target quite a lot at first but stabilizes in a few minutes and returns to the set value; it's at the set value by the time I've finished polar alignment with SharpCap Pro and the initial faff of setting everything up, so no real downtime in use. Looking at pictures taken with this and a ZWO 2600MC, it would be impossible to tell the difference, as they share the same chip.

    From a mechanical standpoint it is a bit different from the ZWO and QHY offerings, but then again it is 800-900 € cheaper. You'll need to buy adapters, as the camera comes with just a few nosepieces, a UV/IR filter, and a tilt plate if your model has sensor tilt. Even with these it's still in a category of its own on pricing. A glowing recommendation from me!

  5. 5 minutes ago, Astroscot2 said:

    A lovely natural looking image. Can I ask what field flattener you used with the Newtonian?

    Thanks, that's exactly what I want from my galaxy shots!

    It's a TS-Optics 0.95x Maxfield coma corrector. It's not very apparent at this 7.52 micron binned pixel size, but it leaves a bit of coma at the edges. Not a pixel peeper's choice for sure.
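For anyone wondering how that 7.52 micron binned pixel size translates to resolution, here is a quick sketch of the standard pixel-scale formula. The ~900 mm native focal length and the 0.95x reduction applied to it are my assumptions for this setup, not confirmed specs:

```python
import math

def pixel_scale_arcsec(pixel_um: float, focal_mm: float) -> float:
    """Pixel scale in arcsec/pixel: 206.265 * pixel size (um) / focal length (mm)."""
    return 206.265 * pixel_um / focal_mm

# Assumed: ~900 mm native focal length with the 0.95x coma corrector.
focal = 900 * 0.95          # ~855 mm effective (assumption)
binned_pixel = 2 * 3.76     # IMX571 3.76 um pixels, 2x2 binned -> 7.52 um
print(round(pixel_scale_arcsec(binned_pixel, focal), 2))  # ~1.81 "/px
```

At roughly 1.8"/px the residual coma would indeed be mostly hidden, which matches the comment above.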

  6. 1hr43min of 30s subs

    M33-1h43min3J.thumb.jpg.9cebcbcaa3ef8f1a48d93707d2866ab4.jpg

    Taken with an OOUK VX8 and a RisingCam IMX571 camera mounted on a Skywatcher EQM-35 PRO, from Bortle 6-7 skies on the night of 1-2 September, with a partial Moon in the sky. The Moon didn't end up bothering things much, other than adding an annoying extra gradient to get rid of. Guiding was mostly on the worse side of 1 arcsecond RMS, hence the 50% resize and slight crop.

    Processed in DeepSkyStacker, SIRIL and Photoshop.

    I find it interesting that an OSC camera picks up quite strong H-alpha signal from the brightest clusters with such short exposures. Also, seeing individual stars in another galaxy just seems so strange to me somehow; I always expect a galaxy to be a uniform mess from so far away.

  7. On 29/08/2021 at 00:07, tomato said:

    It’s very tempting to try and fit extended objects like M31 in the minimum of panels but the downside is you sometimes have to orientate the object to some odd angles which can detract from the finished image.

    Here are two examples, using the same telescope and camera, a 6 panel mosaic which gave a ‘widescreen’ M31, and a 12 panel version which put the galaxy in the more traditional diagonal orientation.

    The big advantage was the 6 panel version was captured as 6x1 hr panels in a single clear night in October, the 12 panel version took close on 3 months to complete.

    0EB1B8AF-14F0-4AE8-A992-74E89866E485.thumb.png.4da5eef9f02b162aa054cce13e703524.pngF451F228-409B-4A89-BC36-F58564BAD681.thumb.jpeg.2ddd89d449ece8c5ffc7e98524eaad62.jpeg

    Astro Pixel Processor will do a decent job of combining the panels, especially if they are taken under similar conditions, this was used on the 6 panel version. However, removing joins and gradients taken over many sessions can be a real challenge, by far the best results I have obtained have been with the Photometric Mosaic script available in Pixinsight. 
     

    Best of luck with your M31 project, I will be attempting another mosaic on this iconic target shortly (or when the weather actually permits).

    I have just tried combining the stacks from DSS to do the mosaic in APP. There are obvious lines between the stacks. Should I do the whole process in APP? I tried, but it had already taken an hour on registration and was nowhere near done, so I just cancelled it. Is this normal for APP? For comparison, I don't think the process took more than 20 minutes from unloading my memory card to having all 4 panels stacked in DSS.

  8. On 30/08/2021 at 19:53, ONIKKINEN said:

    Edit: The mess in the top left corner is caused by my focuser sagging under the newly increased weight of the imaging train compared to a DSLR. Well, maybe not so much the weight as the lever effect of the weight sitting further away from the focuser than a DSLR that mounts right on it.

    Actually, upon further investigation it looks like a fair bit of sensor tilt as well. I took apart my focuser, tightened things down a bit, and reoriented it 90 degrees so that the stronger up-down axis is vertical in typical operation (pointing towards the zenith-ish). That helped a bit, but not entirely.

    I will have to look for a tilt plate to add somewhere in the imaging train to fix it; in the meanwhile I'll have to crop a good 1/3 or so of the frame to hide the uneven field.

     

  9. First light has come and gone!

    M81-firstlight-1h20min.thumb.jpg.d249d08b9d3c51cda5bafb078ea0917d.jpg

    Just a quick and dirty shot of M81-M82 with 80 minutes of 30s subs from Bortle 6-7. The point was mostly to see the colour response (particularly the H-alpha in M82) and what other issues the camera may have. This was an ideal target for me since I had already imaged it with a DSLR, so I have at least some way to compare results. Looks like the starburst H-alpha from M82 is nicely red, star colours are there, and faint fuzzies start appearing even at this low integration time. Even Holmberg IX is a somewhat detectable smudge below M81. Taken through a Baader UV/IR cut filter in the image train, as the camera advertises sensitivity up to 1000nm, which I'm not interested in.

    The data is a joy to work with: absolutely no amp glow and generally just very noise-free. Bias frames have a median ADU of 768 (which is the offset) and 30s dark frames at -10°C have 769.

    The upper left corner of the image is a bit concerning to me with its oblong stars. I checked collimation after shooting (forgot beforehand) and it wasn't quite right. I did collimate before leaving, but I must have banged the scope somewhere during transportation. There could also be problems with backspacing or sensor tilt. I'm hoping it's not sensor tilt, as the camera has no tilt plate to adjust for it. It could also be polar alignment drift, because I did not guide in DEC; more nights out will tell what the cause was.

    Edit: The mess in the top left corner is caused by my focuser sagging under the newly increased weight of the imaging train compared to a DSLR. Well, maybe not so much the weight as the lever effect of the weight sitting further away from the focuser than a DSLR that mounts right on it.
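The bias/dark median comparison mentioned above is easy to script. A minimal sketch with numpy, using synthetic frames as stand-ins for real data (in practice you would load the FITS frames, e.g. with astropy.io.fits; the 768 ADU offset and the noise level here are just illustrative values):

```python
import numpy as np

rng = np.random.default_rng(0)
OFFSET = 768  # camera offset in ADU (the value quoted above)

# Synthetic stand-ins for real calibration frames (load FITS files in practice).
bias = OFFSET + rng.normal(0, 3, size=(1000, 1000))        # read noise only
dark = OFFSET + rng.normal(0, 3, size=(1000, 1000)) + 1.0  # plus a tiny dark signal

# If the dark median sits barely above the bias median, dark current is negligible.
print(round(float(np.median(bias))))  # ~768
print(round(float(np.median(dark))))  # ~769
```

A dark median only ~1 ADU above the bias median, as reported above, is consistent with essentially noise-free darks.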

  10. 60618681_20210830_0239502.thumb.jpg.04dc3f233606ac29c33e9f57ec0daf6d.jpg

    A sudden rare weather phenomenon, where the sky is not fully covered in clouds, appeared, so I got to start my post-summer DSO season!

    Everything is going wrong, of course: I broke my Bahtinov mask in transportation, so I had to focus on the Moon, which I would of course wish was not up messing with the background lighting. Forgot to balance the scope before PHD2 calibration... too excited, have to begin again. Still, it's nice to actually be doing astronomy again.

  11. 3 hours ago, AR86 said:

    So after waiting for 2 weeks for a clear night, this is my first lunar image taken through my new/first setup. I'm using a 130PDS on an EQ3-2, prime focussed with a Sony A6300. I stacked the image in registax from 6 images, cropped in photoshop and am fairly happy with the result.

    Apart from the left side of the image, the moon seems too smooth on the side facing the sun, is this just the result of direct light not causing shadows on the surface or should I be adjusting the contrast to try and pull some detail out of it?

    Any other (constructive) criticism is more than welcome, very eager to learn more!

    Adam

     

    I hope you don't mind, but I ran this image through Lightroom and Photoshop to extract some detail out of it.

    AR86-Lunatry.thumb.jpg.433129145acd4f1d5bfde80e0f533410.jpg

    I am certainly not a Lunar expert, but I took a shot at this, mostly for practice too.

    I reduced highlights, increased shadows, and applied texture, dehaze, clarity and a bit of sharpening in Lightroom. At the end I exported to Photoshop, applied auto-color and a saturation boost to bring out the natural colour of the Moon with its different elements, and ran Topaz DeNoise AI with high sharpening. Topaz DeNoise AI is not free, and in this case had only a slight effect, but it is an effective tool if not overused.

    The problem I ran into when imaging with a 130mm Newtonian and a DSLR was dynamic range, which is a huge issue with the Moon. The bright parts are stupidly bright, and the terminator, where most of the interesting detail is, is quite dark. What you can do is expose for one or the other, but not both, and then correct the under- or overexposed part in post. Note that you cannot recover detail from completely white-clipped data! This shot is not white-clipped, apart maybe from the bright craters, but I always struggle with those too.

     

    Extracting detail from the fully lit side of the Moon is always a losing battle; there just isn't much there because of the angle of the Sun. The fact that you can sort of see crater edges forming in your shot is reason to believe you had good focus and a reasonably well exposed image. If anything, you could use more frames to even out the issues caused by seeing and the atmosphere. The more images you can be bothered to take, the better it will be, although at this sort of resolution around 200 or so will probably be close to the best it can get, if conditions are average.

    It's a good shot! If you take some more frames you'll definitely see improvements in the lit-side detail.

    If you want to take one thing away from this: try applying some of the Lightroom edits to the picture, especially sharpening. Sharpening really does wonders on Lunar full-disk shots!
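On the clipping point above: checking whether the highlights are actually blown is a one-liner once you know the saturation level. A sketch with numpy, using a synthetic frame (the 16383 ceiling assumes 14-bit DSLR raw data; adjust for your camera):

```python
import numpy as np

def clipped_fraction(img: np.ndarray, saturation: int = 16383) -> float:
    """Fraction of pixels at or above saturation: these whites are unrecoverable."""
    return float(np.mean(img >= saturation))

# Synthetic example: a mostly mid-tone frame with a small patch of blown pixels.
frame = np.full((100, 100), 8000)
frame[:2, :5] = 16383  # 10 saturated pixels out of 10,000
print(clipped_fraction(frame))  # 0.001
```

Anything above a tiny fraction in the bright-limb region means detail there is gone for good, no matter how hard you push shadows and highlights afterwards.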

  12. 19 minutes ago, ollypenrice said:

     

    I'm not sure that I'd call this field rotation. I think the problem is geometric. The sky we are photographing is, in effect, seen on the inside of a sphere but our final image will appear on a flat surface. This is a familiar problem in cartography. There are different projections of the 3D earth onto 2D paper maps, each with its own strengths and weaknesses. When creating a large mosaic you need to decide on your field geometry. If you just pick a random panel from your set the software will treat that panel's geometry as definitive and all other panels will work outwards from that one. The best you can do in this case is start with a panel in the centre of your image.

    A better idea is to take a widefield image covering all of your intended mosaic, centered on the same point. This can be resampled upwards to the size of the intended mosaic (it will look terrible but that doesn't matter) and it will become your registration template for all your mosaic panels. Your final mosaic will have the field geometry of your widefield image.

    Olly

    I do have a 55-250mm kit lens and a Canon 550D; would this work for the widefield template, or is it a better idea to take a centered frame with the main imaging scope?

  13. 2 minutes ago, tomato said:

    It’s very tempting to try and fit extended objects like M31 in the minimum of panels but the downside is you sometimes have to orientate the object to some odd angles which can detract from the finished image.

    Here are two examples, using the same telescope and camera, a 6 panel mosaic which gave a ‘widescreen’ M31, and a 12 panel version which put the galaxy in the more traditional diagonal orientation.

    The big advantage was the 6 panel version was captured as 6x1 hr panels in a single clear night in October, the 12 panel version took close on 3 months to complete.

    0EB1B8AF-14F0-4AE8-A992-74E89866E485.thumb.png.4da5eef9f02b162aa054cce13e703524.pngF451F228-409B-4A89-BC36-F58564BAD681.thumb.jpeg.2ddd89d449ece8c5ffc7e98524eaad62.jpeg

    Astro Pixel Processor will do a decent job of combining the panels, especially if they are taken under similar conditions, this was used on the 6 panel version. However, removing joins and gradients taken over many sessions can be a real challenge, by far the best results I have obtained have been with the Photometric Mosaic script available in Pixinsight. 
     

    Best of luck with your M31 project, I will be attempting another mosaic on this iconic target shortly (or when the weather actually permits).

    I agree that I prefer Andromeda a bit diagonal. It's a weird opinion, since what points in which direction is completely arbitrary and all orientations are "correct". This is not a problem, however, since I will just rotate the entire imaging train 90 degrees to reach this orientation.

    M31-4panel.thumb.PNG.4b3631091c852df5012096847b5f5489.PNG

    Something like this is my plan. It could use a few more panels, but if I get this to work, with all frames more or less on point, it would be good enough for me. The local weather is dreadful and I don't have a backyard, so 10+ hour projects are not something I'm looking for right now.

    Astro Pixel Processor does look pretty nice. It even comes with a free trial and a yearly rental option. I am aware of PixInsight too, but that software looks like it was written by aliens, for aliens, in an alien language. Complete gibberish to me when I've tried looking at some tutorials.

  14. 18 minutes ago, symmetal said:

    A point to note is that with an EQ mount there will be image rotation between mosaic frames that differ in RA, so you need to ensure any overlap covers this. The rotation amount depends on the Declination of the target. At Dec 0 there is no rotation and it increases with an increase in Dec being very significant nearer the pole. Here's the effect at around Declination 60

    583172372_Mosaicrotation.thumb.jpg.5b92410356f831e42c4899f0795a0103.jpg

    Alan

    I'd never heard of that effect before, thanks for the heads-up. I think this is the reason my previous shots have all had some sort of field rotation when I shot on different days and didn't quite nail the framing.
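The effect Alan describes can be roughly estimated: lines of constant RA converge toward the pole, so two panels separated in RA appear rotated relative to each other by approximately ΔRA x sin(Dec) when projected onto a flat image. A small sketch of that approximation (my own back-of-the-envelope formula, not something from the thread, and only reasonable for small separations):

```python
import math

def panel_rotation_deg(delta_ra_deg: float, dec_deg: float) -> float:
    """Approximate apparent rotation between two panels separated in RA.

    Uses the meridian-convergence approximation: rotation ~ dRA * sin(Dec).
    """
    return delta_ra_deg * math.sin(math.radians(dec_deg))

print(panel_rotation_deg(2.0, 0.0))            # 0.0 at the celestial equator
print(round(panel_rotation_deg(2.0, 60.0), 2)) # ~1.73 degrees at Dec 60
```

This matches Alan's description: no rotation at Dec 0, and increasingly significant rotation toward the pole, which is why the panel overlap needs to cover it.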

  15. 29 minutes ago, ollypenrice said:

    I would say that a 20% overlap will make life easier. Automated mosaic software may well handle a two-panel mosaic quite easily but, as any mosaic grows, it becomes increasingly unlikely that automated software will succeed. This kind of software is very competent with daytime images, but astro images are massively stretched and present a much bigger challenge.

    As for exposure time, how deep do you want to go? I was intrigued by the size of M31 on star charts. It was way bigger on the charts than on most images, so I decided to try a set of 30 minute subs to see if I could find the galaxy's outer reaches as seen on the charts.

    spacer.png

    For mosaics I use Registar in conjunction with Photoshop.

    Olly

     

    I should maybe have mentioned that I am using an EQM-35 PRO, which is not a very nice match (read: a nightmare) for the 8-inch Newtonian. I will not be going over 60s subs in any scenario, and preferably I would go shorter. I do have a new camera that is yet to see first light, but it has a 16-bit ADC, 80%+ QE, 14 stops of dynamic range and practically no noise, so I'm hoping 30s or even shorter exposures bring out the halo at least to some extent. What I was wondering is how long the integration time should be per panel with a 2x2, 3x3 or even 4x4 bin in the final combined picture. I will need at least 3 panels, preferably 4, to get roughly the same field of view as in yours.

    Yours looks fantastic, with the well-captured halo extending much farther out than I usually expect to see; I would be happy with a capture half as good!
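On the binning question above: software-binning n x n combines n² pixels, so for pure uncorrelated noise the per-pixel standard deviation drops by roughly a factor of n, which is why a binned mosaic can get away with less integration per panel. A quick demonstration with synthetic numpy noise (not real data):

```python
import numpy as np

def bin2x2(img: np.ndarray) -> np.ndarray:
    """Software-bin an image 2x2 by averaging non-overlapping pixel blocks."""
    h, w = img.shape
    return img[: h - h % 2, : w - w % 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

rng = np.random.default_rng(1)
noise = rng.normal(0, 10, size=(1000, 1000))  # pure Gaussian noise, sigma = 10
binned = bin2x2(noise)
# Averaging 4 pixels cuts uncorrelated noise by sqrt(4) = 2.
print(round(noise.std(), 1), round(binned.std(), 1))  # ~10.0 -> ~5.0
```

Real subs also contain sky background and fixed-pattern signal, so the gain in practice is somewhat smaller, but the square-root scaling is the reason 30-minute binned panels are not an unreasonable starting point.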

  16. The only target I'm interested in that doesn't fit my setup's field of view (200mm F4.2 Newtonian, APS-C sensor) is the Andromeda galaxy, which I obviously want to image if the clouds ever go away.

    I am using NINA, so creating the mosaic itself is nothing but a click away in the framing tool, but I have no idea about the specifics. For example, how much should I overlap the frames? My scope, being a Newtonian, will probably have some not-quite-corrected coma and other tracking artifacts at the edges, so I assume 10% is not enough. The other unknown is exposure time per panel. On one hand I think I could get away with as little as 30 minutes per panel, since I will definitely bin the final picture anyway, but is that enough? Ideally I would shoot this in one night to get conditions per panel as close to each other as possible, or is that overthinking it? I would like to plan ahead and not spend valuable time outside tinkering with the details.

    As for the processing, actually combining the panels is a complete unknown for me. I know some software can do this, but are there any recommendations from people who have done the same? I assume I would roughly process each panel first and then do the combining. The exposure-time-per-panel problem of course goes away if some software can equalize different gradients and background levels from different sessions.
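For planning purposes, the panel geometry falls out of simple trigonometry: per-panel field of view from the focal length and sensor size, then the step between panel centres is FOV x (1 - overlap). A sketch with assumed numbers (an 840 mm focal length for the 200mm F4.2 Newtonian and a 23.5 x 15.6 mm APS-C sensor are both my assumptions):

```python
import math

def fov_deg(sensor_mm: float, focal_mm: float) -> float:
    """Field of view along one sensor axis, in degrees."""
    return math.degrees(2 * math.atan(sensor_mm / (2 * focal_mm)))

focal = 200 * 4.2            # assumed ~840 mm for a 200 mm F4.2 Newtonian
w, h = fov_deg(23.5, focal), fov_deg(15.6, focal)  # assumed APS-C dimensions
overlap = 0.20               # 20% overlap between neighbouring panels
step_w, step_h = w * (1 - overlap), h * (1 - overlap)
print(round(w, 2), round(h, 2))            # per-panel FOV in degrees
print(round(step_w, 2), round(step_h, 2))  # spacing between panel centres
```

With a roughly 1.6 x 1.1 degree panel, covering M31's ~3 degree length needs at least a few panels even before overlap, which is consistent with the 3-4 panel plan above.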

  17. 3 hours ago, Adreneline said:

    An amazing result - your image has a real sense of three dimensional depth to the galaxy. All too often images of M101 - including my own! - appear quite two dimensional and flat.

    Adrian

    I'm happy you like it!

    I'm not sure I understand the 3D aspect fully, but from the failed versions of the same data set that I have processed, I think I sort of get what you mean. It's very easy to "deep fry" an image of M101 with light pollution in the mix, and getting the right balance of an even background and the faintest spiral structures took me too many tries. At least the lower part of the spiral was very easy to accidentally process out while trying to "fix" other parts of the image.

  18. Consider buying a camera outside the usual suspects among manufacturers.

    I bought the RisingCam colour camera with the IMX571 sensor and it works great. Obviously it's been cloudy for weeks since I bought it, but lots of people are happy with theirs and so am I, and the price is really competitive compared to the monopoly-gang manufacturers like ZWO and QHY, who have decided to ask 2200 euros for equivalent products for no real reason other than that they can.

     

    https://www.aliexpress.com/item/4001359313736.html?spm=a2g0s.9042311.0.0.5b604c4dWaHMUL

     

    I followed the astrometry.net pathway down the rabbit hole and found lots of interesting faint fuzzies! https://www.legacysurvey.org/viewer/?ra=210.8357&dec=54.3581&layer=unwise-neo6&poly=209.4818,54.0860,211.8538,53.7734,212.2074,54.6149,209.7887,54.9339,209.4818,54.0860

     

    Some have redshifts greater than z = 0.1, which places them somewhere in the 1-2 billion light-year range! Not much more than single pixels, of course, but I almost cannot believe an 11-year-old DSLR can capture any pixels of something like this...
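That distance figure can be sanity-checked with the low-redshift Hubble law, d ≈ cz / H0, a rough approximation that ignores relativistic and cosmological corrections (the H0 = 70 km/s/Mpc value is an assumption):

```python
# Rough Hubble-law distance for small redshifts: d ~ c * z / H0.
C_KM_S = 299_792          # speed of light, km/s
H0 = 70.0                 # Hubble constant, km/s/Mpc (assumed value)
MPC_TO_MLY = 3.262        # megaparsecs to millions of light-years

def hubble_distance_mly(z: float) -> float:
    """Approximate distance in millions of light-years for small z."""
    return C_KM_S * z / H0 * MPC_TO_MLY

print(round(hubble_distance_mly(0.1)))  # ~1400 Mly, i.e. about 1.4 billion ly
```

So z = 0.1 lands right at the lower end of the 1-2 billion light-year range quoted above.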

  20. 245x60s of exposure

    M101-4hr-resize.thumb.jpg.28075994634650adcdeb0594159fb19d.jpg

    Taken with an OOUK VX8 on a Skywatcher EQM-35 PRO and a Canon EOS 550D. Guided with a 60mm F4 guidescope and an ASI120MM, controlled by a Windows 10 mini PC remote-controlled with a tablet. Guiding was not great, hence the 50% resize; I think the RMS was somewhere around 1.5-2.5 arcseconds most of the time. Bortle rating of somewhere around 6-7.

    This is old data that I have been sitting on since late April, when it was captured. I have been processing it every now and then but never thought of it as "finished" until now. I think this one looks pretty good. I took around 7 hours of 60s subs on M101 over three nights just before the spring season ended; this shot consists of the 245 one-minute subs that survived, as most of the rest were ruined beyond use, or just good enough to stack but would add extra noise. This was my 4th actual DSO imaging target with a GO-TO mount, so definitely a work in progress.

     

    One problem that is difficult to fix now but easy to avoid in the future: take proper flats in situ, not later. Slight changes in camera angle ruined some of the flats and led to unnecessary gradients. I think I also collimated the scope and kept using the same flats afterwards, which in hindsight is obviously not a good idea.

     

    Processed in SIRIL and Photoshop. Nothing fancy in PS: some masked saturation adjustments, Lightroom tweaks, and denoising in Topaz.

  21. 14 minutes ago, jetstream said:

    Seriously?

    You should try driving over here or in the rural USA.. not sure about yours but our headlights work well.

    Finland is probably the most desolate country in Europe; outside the "big" cities there is nothing but forests and unlit rural roads, so I definitely know what you mean, but I think you might have missed the point.

    The point was that someone will have broken headlights, someone will wear dark clothing with no hi-vis strips in the dead of night and think "that car sees me, I mean, I'm crossing the road, they must see me", and some cyclist will ride without lights in the dead of night. But this is an issue mainly in cities with higher traffic (which is also where there should be lighting), and road lights remove most of these problems. Of course it doesn't make sense to light up every road everywhere; if some rural road sees 5 cars per hour, then lighting that empty stretch is probably not a good use of money. Nobody is seriously planning to cover all rural areas with city lights; it's just that rural areas are disappearing pretty fast around the world.

  22. 7 hours ago, Nik271 said:

    It's so simple to deal with light pollution, especially with modern street lights, which can be remotely controlled.

    Lighting can be something like 10% of a town's electricity bill, and you'd imagine dimming the bloody things after midnight is a no-brainer; it saves both energy and money. Hopefully, with the projected energy price increases and the plan to charge cars and heat houses with heat pumps, this should translate into a drive to save it at night for where it's really needed.

    Just one accident even vaguely related to dark roads is enough to permanently bury this idea, at least that's what I think. All it takes is one stick carried by the wind, or a piece of plastic broken off someone's old busted bumper on the road, and a motorist dies. Dark roads are death traps, and it's perfectly reasonable to want every road fully lit at all times, including midnight. If roads are not lit, drivers will need to rely on high beams or auxiliary lights, which are also dangerous for other people.

    Amateur astronomers somewhere complaining about city lights will be forgotten as soon as safety is concerned. If all cars had working headlights, if everyone wore high-visibility clothing outdoors, and if every cyclist had a light, it would be possible to dim city lights, but since this is not remotely true, I doubt it will happen in any big urban area anywhere in the world.
