Everything posted by ONIKKINEN

  1. I'd like to think that I'm an astronomer's astronomer and a butterfly astronomer. I do want to learn something from the target I image by looking at the image I took, so the target is always the priority, not so much how pretty it is. Of course, if it's both, then that ticks all the boxes for me. Galaxies are my thing, and you can always tell something about a galaxy as long as it's not a tiny pile of pixels in the corner. But the real choice is missing from the list: weather forecast enthusiast.
  2. I had a problem with DSS reporting bogus star counts. It had decided that a frame which was obviously a failure to my eyes had more than a hundred stars, hence the additional metrics that I also use. In this case I was close to the meridian and my RA motor was slipping every couple of seconds because of an underperforming mount, leading to bunched-up star trails. Basically, the brightest 10 or so stars trailed intermittently and looked like 100 stars in a tight line to DSS.
  3. Did you use kappa-sigma clipping? I had a case where DSS decided to stack all of my frames against a single frame that was nothing but clouds and star trails, and the resulting image came out pretty much empty. It turned out that kappa-sigma clipping had simply rejected almost all pixels. Try using the average setting in the lights tab to see if something funny is going on.
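For anyone curious what kappa-sigma clipping actually does, here is a rough NumPy sketch of the general idea (not DSS's actual implementation): pixels further than kappa standard deviations from the per-pixel mean get rejected before averaging, which is also why a stack dominated by bad frames can end up rejecting nearly everything.

```python
import numpy as np

def kappa_sigma_stack(frames, kappa=2.0, iterations=3):
    """Average a stack of frames, rejecting outlier pixels per position.

    frames: array of shape (n_frames, height, width).
    Pixels further than kappa * sigma from the per-pixel mean are masked
    and the mean is recomputed, repeated for a few iterations.
    """
    data = np.ma.masked_invalid(np.asarray(frames, dtype=np.float64))
    for _ in range(iterations):
        mean = data.mean(axis=0)
        sigma = data.std(axis=0)
        # Reject pixels that deviate too much from the current per-pixel mean
        data = np.ma.masked_where(np.abs(data - mean) > kappa * sigma, data)
    return data.mean(axis=0).filled(np.nan)

# Example: 10 synthetic frames; the one badly trailed/cloudy frame gets rejected
frames = np.random.normal(100, 5, size=(10, 64, 64))
frames[3] += 500
stacked = kappa_sigma_stack(frames)
```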
  4. The app means your polar scope, not the actual finder scope attached to your telescope. You need to polar align so that Polaris is in the same position as the app shows. To do this you might need to first rotate your RA axis so that 6 points down and 12 points up, as many polar scopes are not that well aligned. Basically, just adjust the alt/az knobs until you see the exact same thing as the app shows.
  5. Ditto on this one. Going through FITS files one by one and manually inspecting them is very time consuming and not always really possible, for instance when your lights are only barely over the read noise level and show almost nothing but black. You can throw these into DSS and inspect them that way, but it's still far from optimal, and not really worth it when you have so many subs (119, I believe?). NINA makes it easy to analyse the images at a glance. What I do with NINA is have the guiding RMS error in arcseconds, the star count and the star HFR in the file name itself; you can set these in the options. If the RMS is too high, guiding was bad and I throw the sub out. If the star count is too low but the HFR is good, it was probably clouds or something external like accidentally shining a light near the telescope; if the HFR is bad, the focus was off. It takes a bit of getting used to coming from a DSLR, and I just had to accept that I can't easily inspect the subs themselves. If you must see the subs, you can always view them in NINA as stretched and debayered images, but only before shutting down the session and NINA for that night.
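To illustrate that triage, here is a small Python sketch; the filename pattern and the thresholds are just my assumptions for the example (set whatever tokens you like in NINA's file pattern options), not a NINA standard.

```python
import re
from pathlib import Path

# Hypothetical pattern, e.g. "M33_30s_RMS0.85_STARS412_HFR2.31.fits"
PATTERN = re.compile(r"RMS(?P<rms>[\d.]+)_STARS(?P<stars>\d+)_HFR(?P<hfr>[\d.]+)")

MAX_RMS = 1.0    # arcsec; worse guiding than this and the sub goes out
MIN_STARS = 50   # fewer stars usually means clouds or stray light
MAX_HFR = 3.0    # larger HFR usually means the focus drifted

def keep_sub(path: Path) -> bool:
    m = PATTERN.search(path.name)
    if m is None:
        return True  # no metrics in the name, inspect manually
    return (float(m["rms"]) <= MAX_RMS
            and int(m["stars"]) >= MIN_STARS
            and float(m["hfr"]) <= MAX_HFR)

for sub in sorted(Path("lights").glob("*.fits")):
    print("keep" if keep_sub(sub) else "reject", sub.name)
```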
  6. I use a finderscope to find planets. First you will need to align it as perfectly as reasonably possible, preferably on a still target like a streetlamp or another ground object far away, then just manually slew the scope to the target. This way, polar alignment with a compass is probably enough.
  7. It's easy to think that to combat light pollution you need a light pollution filter to block it, but hear me out on this little "thought experiment". What is light pollution? It is mostly street lights, house lights and other types of lighting that allow humans to see at night. Why is that lighting a specific colour, usually somewhere around yellow-green-white (white LEDs also peak in the green, because white is blue, green and red mixed, and the brightest part sits in the green, in the middle of the spectrum)? These wavelengths are chosen because they are where the human eye is most responsive, so the least amount of energy provides the greatest amount of usable lighting at night (for humans). Now why did the human eye evolve this way? Because there is essentially only one natural light source, the Sun, and its output peaks somewhere around the whitish-yellow part of the spectrum, which is also where light pollution filters block most of the light. Mostly by coincidence, most galaxies are brightest somewhere around sunlight in colour. This varies a bit: actively star-forming galaxies (like Triangulum!) are noticeably bluer than the Sun, while galaxies with practically no active star formation, like ellipticals (M87, M32, M110), are mostly blobs of stars redder than the Sun. This is because blue stars live only for a blink of an eye in cosmic terms, so if you see a blue star it must have been born recently, as it will also die very soon; older galaxies with no active star formation only contain the older, smaller stars that keep happily fusing hydrogen for billions or trillions of years. Anyway, the point is that the average colour of galaxies is pretty close to sunlight (a coincidence). So light pollution filters are essentially "galaxy light" filters too, since light pollution and galaxy light are very similar. The reason this is not (mostly) a problem is that light pollution is not a sheet of cloth over your telescope that physically blocks light; it is an added colour. And colour is easy to balance out in processing, especially if you have a camera like the IMX571 that has an almost unbelievable colour response with very short exposure times. I should add that it is NOT possible to properly colour balance a shot taken through a light pollution filter, as a big portion of the spectrum is missing. Here you can see a measured spectrum of M33, a very blue galaxy that is one of the least affected galaxies for light pollution filters. The light pollution blocking bands sit mostly between H-beta and H-alpha, which is most of the spectrum (but not the peak, because M33 is bluer than average). Edit: I might also add that local light pollution can be greater than the camera's ability to produce proper colours. In that case there is really no right answer; either take the hit of the LP filter or travel to better skies. But I have never imaged from better than Bortle 6 skies, and most of my imaging is from Bortle 7-8 last winter with an 11-year-old DSLR that is nowhere near as good as the IMX571 chip, and it is still possible to get proper colour balance without filters. Edit 2: Of course, if you intend to image non-broadband targets like nebulae, you will greatly benefit from narrowband filters; with an OSC camera it would ideally be a duo-band filter like the L-eXtreme from Optolong. Nebulae emit mostly at very specific wavelengths that are easy to isolate from the rest of the spectrum with these filters, and with that method you can gather good data from right under a streetlight, if you want to.
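To make the "colour is easy to balance out in processing" point concrete, here is a crude NumPy sketch of the simplest possible approach, assuming a linear (unstretched) RGB stack; it is only an illustration of the idea, not what SIRIL's photometric colour calibration actually does.

```python
import numpy as np

def neutralize_background(rgb, sky_fraction=0.25):
    """Crude light-pollution colour balance for a linear RGB stack.

    rgb: float array of shape (height, width, 3).
    The sky background of each channel is estimated from the dimmest
    pixels and the channels are shifted so the background comes out grey.
    """
    rgb = np.asarray(rgb, dtype=np.float64)
    backgrounds = []
    for c in range(3):
        channel = rgb[..., c]
        # Median of the dimmest pixels approximates the sky background
        cutoff = np.quantile(channel, sky_fraction)
        backgrounds.append(np.median(channel[channel <= cutoff]))
    target = float(np.mean(backgrounds))
    balanced = np.empty_like(rgb)
    for c in range(3):
        # Remove each channel's own background, keep a common neutral pedestal
        balanced[..., c] = rgb[..., c] - backgrounds[c] + target
    return balanced
```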
  8. Thanks for the link, mostly helpful. But the example in the tutorial has 29 subs and doesn't do star analysis or registration, which are the phases that took my 250 subs around 4 hours. I just found it weird that APP hangs on to the process for so long; I had to leave the PC running as I went to sleep because it had already taken at least 6 hours. Wondering if I did something wrong? I mostly just followed the recommendations in APP. My PC is not afraid of a slight breeze, it's an overclocked 6700K with decent DDR4 RAM, if that means anything to you, so it's weird that it took so long. Anyway, as a proof of concept I combined 5 panels worth of data into one and got it working fairly well. There are seams, but they are far less obvious than I expected. Also, the middle parts are missing the blue halo of young hot stars as they have the least data. I think I've got the hang of it now; I just need to set aside a full processing day whenever I return to a mosaic project.
  9. The sensor behaves as expected from the IMX571, so: very good. I don't think using light pollution filters on galaxies is a good idea, as galaxies are very bright in the wavelengths that light pollution filters block. Quite honestly I have had no trouble with light pollution; it's almost like it's not there. Just colour balance and it's gone; the 16-bit ADC and high colour sensitivity will retain the data through some pretty nasty LP.
  10. You have nice detail in there, all the spiral arms are present. It looks like maybe you have clipped the whites in processing, as the core is very bright, and the colour balance might have taken a hit in the process. Light pollution filters in general are not very helpful with galaxies, since galaxies are brightest around the same wavelengths as light pollution, and the missing colours also make colour balance more difficult. I found processing only in Photoshop quite difficult at first; actually I still do if I only use PS. It's difficult to see whether you've clipped the data, and you generally can't see what you're working with until you stretch the stack. If you want to try something else, I can recommend SIRIL, a free astrophotography processing package. You can stack in DSS, colour balance and stretch in SIRIL, and then do final touches in Photoshop. SIRIL is easy to use (for astro processing software) and makes clipping whites or blacks entirely optional. It also has a photometric colour calibration tool, which pulls the correct colour from the stars recognised in the picture itself, not something you have to balance yourself. It might not work with light pollution filters though, as they cut off significant portions of the spectrum.
  11. Just dipping my toes into the dedicated astro cam world, this being my first one, so I can't compare it to first-hand experiences other than a DSLR, which I believe is so far from a fair comparison that it doesn't even make sense. The camera performs extremely well and is a joy to work with. No obvious hiccups with N.I.N.A. come to mind. There is no amp glow or pattern noise of any kind, and the cooler works well and fairly accurately. The cooler overshoots its target quite a lot at first but stabilizes in a few minutes and returns to the set value; it's at the set value by the time polar alignment with SharpCap Pro and the initial faff of setting everything up are done, so no real downtime in use. Looking at pictures taken with this and a ZWO ASI2600MC, it would be impossible to tell the difference, as they share the same chip. From a mechanical standpoint it is a bit different from the ZWO and QHY offerings, but then again it is 800-900e cheaper. You'll need to buy adapters, as the camera comes with just a few nosepieces, a UV/IR filter and a tilt plate if your model has sensor tilt. Even with these, it's still in a category of its own on pricing. Glowing recommendation from me!
  12. Thanks, that's exactly what I want from my galaxy shots! It's a TS-Optics 0.95x Maxfield coma corrector. It's not so apparent at this 7.52-micron binned pixel size, but it leaves a bit of coma at the edges. Not a pixel peeper's choice for sure.
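For reference, the image-scale arithmetic behind that 7.52-micron figure is below; the 3.76 µm native pixel is the IMX571 spec, and the ~840 mm focal length is my own assumption based on the 200 mm f/4.2 figure mentioned elsewhere in this thread.

```python
# Image scale in arcsec/pixel = 206.265 * pixel_size_um / focal_length_mm
native_pixel_um = 3.76                 # IMX571 pixel pitch
binned_pixel_um = 2 * native_pixel_um  # 2x2 bin -> 7.52 um
focal_length_mm = 840.0                # assumed: 200 mm aperture at ~f/4.2

scale = 206.265 * binned_pixel_um / focal_length_mm
print(f"{scale:.2f} arcsec/pixel")     # ~1.85"/px binned, ~0.92"/px native
```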
  13. 1 hr 43 min of 30 s subs, taken with an OOUK VX8 and a RisingCam IMX571 mounted on a Skywatcher EQM-35 PRO, from Bortle 6-7 on the night of 1-2 September with a partial Moon in the sky. The Moon didn't end up bothering all that much, other than adding an annoying extra gradient to get rid of. Guiding was mostly on the worse side of 1 arcsecond RMS, hence the 50% resize and slight crop. Processed in DeepSkyStacker - SIRIL - Photoshop. I find it interesting that an OSC camera picks up quite strong H-alpha signal from the brightest clusters with such short exposures. Also, seeing individual stars in another galaxy just seems so strange to me somehow; I always expect a galaxy to just be a uniform mess from so far away.
  14. I have just tried combining the stacks from DSS to do the mosaic in APP. There are obvious lines between the stacks. Should I do the whole process in APP? I tried, but it had already taken an hour on registration and was nowhere near done, so I just cancelled it. Is this normal for APP? For comparison, I don't think the process took more than 20 minutes from unloading my memory card to having all 4 panels stacked in DSS.
  15. Actually, upon further investigation it looks like there is also a fair bit of sensor tilt. I took apart my focuser, tightened things down a bit, and reoriented it 90 degrees so that the stronger up-down axis is vertical in typical operation towards the zenith-ish; that helped a bit, but not entirely. I will have to look for a tilt plate to add somewhere in the imaging train to fix it, and in the meanwhile I will have to crop a good 1/3 or so of the frame to hide the uneven field.
  16. First light has come and gone! Just a quick and dirty shot of M81-M82 with 80 minutes of 30 s subs from Bortle 6-7. The point was mostly to see the colour response (particularly the M82 H-alpha) and what other issues the camera may have. This was an ideal target for me, since I have already imaged it with a DSLR, so I have at least some way to compare results. It looks like the starburst H-alpha from M82 is nicely red, star colours are there, and faint fuzzies start appearing at this low integration time. Even Holmberg IX is a somewhat detectable smudge below M81. Taken through a Baader UV/IR-cut filter in the image train, as the camera advertises sensitivity up to 1000 nm, which I'm not interested in. The data is a joy to work with, absolutely no amp glow and generally just very noise-free. Bias frames have a median ADU of 768 (which is the offset) and 30 s dark frames at -10 have 769. The upper left corner of the image is a bit concerning to me with its oblong stars. I checked collimation after shooting (forgot beforehand) and it wasn't quite right. I did collimate before leaving, but I must have banged the scope somewhere during transportation. It could also be a problem with back spacing or sensor tilt; I'm hoping it's not sensor tilt, as the camera has no tilt plate to control it. It could also be polar alignment drift, because I did not guide in DEC; more nights out will tell what the cause was. Edit: The mess in the top left corner is caused by my focuser sagging under the newly increased weight of the imaging train compared to a DSLR. Well, maybe not so much the weight as the lever effect from having the weight sit further away from the focuser than a DSLR that sits right on it.
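If anyone wants to repeat that bias-versus-dark sanity check, a few lines of Python with astropy is enough; the folder names here are just placeholders for wherever your calibration frames live.

```python
import numpy as np
from astropy.io import fits
from pathlib import Path

def median_adu(folder):
    """Median pixel value across all FITS frames in a folder."""
    values = [np.median(fits.getdata(f)) for f in sorted(Path(folder).glob("*.fits"))]
    return float(np.median(values))

bias = median_adu("bias")   # should sit at the camera offset (~768 ADU here)
dark = median_adu("darks")  # 30 s darks barely above bias = negligible dark current
print(f"bias {bias:.1f} ADU, dark {dark:.1f} ADU, difference {dark - bias:.1f} ADU")
```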
  17. A sudden rare weather phenomenon where the sky is not fully covered in clouds appeared, so I got to start my post-summer DSO season! Everything is going wrong, of course: I broke my Bahtinov mask in transportation, so I had to focus on the Moon, which of course I wish were not up messing with the background lighting. Forgot to balance the scope before PHD2 calibration... too excited, had to begin again. Still, it's nice to actually do astronomy again.
  18. I hope you don't mind, but I ran this image through Lightroom and Photoshop to extract some detail out of it. I am certainly not a lunar expert, but I took a shot at this mostly for practice too. I reduced highlights, increased shadows, and applied texture, dehaze, clarity and a bit of sharpening in Lightroom. In the end I exported to Photoshop, applied auto-color and a saturation boost to bring out the natural colour of the Moon with its different elements, and ran Topaz AI denoise with high sharpening. Topaz AI denoise is not free and in this case had a very slight effect, but it is an effective tool if not overused. The problem I ran into when imaging with a 130 mm Newtonian and a DSLR was dynamic range, which is a huge issue with the Moon. The bright parts are stupidly bright, and the terminator, where most of the interesting detail is, is quite dark. What you can do is expose for either of these, but not both, and then correct the one not exposed for in post. Note that you cannot recover detail from completely white-clipped data! This shot is not white-clipped, apart maybe from the bright craters, but I always struggle with those too. Extracting detail from the lit side of the Moon is always a losing battle; there just isn't much because of the angle of the Sun. The fact that you can sort of see crater edges forming in your shot is reason to believe that you had good focus and a reasonably well exposed image. If anything, you could use more frames to even out the issues caused by seeing and the atmosphere. The more images you can bother taking, the better it will be, although at this sort of resolution around 200 or so will probably be close to the best it can be, if the conditions are average. It's a good shot! If you take some more frames you'll definitely see improvements in the lit-side detail. If you want to take something away from this: try applying some of the Lightroom edits to the picture, especially sharpening. Sharpening really does wonders on lunar full-disk shots!
  19. I do have a 55-250mm kit lens and a Canon 550D; would this work for the widefield template, or is it a better idea to take a centred frame with the main imaging scope?
  20. I agree that I prefer Andromeda to be a bit diagonal. It's a weird opinion, since what points in which direction is completely arbitrary and all opinions and orientations are "correct". This is not a problem, however, since I will just rotate the entire imaging train 90 degrees to reach this orientation. Something like this is my plan. It could use a few more panels, but if I get it to work and all frames end up more or less on point, it would be good enough for me. Local weather is dreadful and I don't have a backyard, so 10+ hour projects are not something I'm looking for right now. Astro Pixel Processor does look pretty nice; it even comes with a free trial and a yearly rental option. I am aware of PixInsight too, but that software looks like it was written by aliens, for aliens, in an alien language. Complete gibberish to me when I've tried looking at some tutorials.
  21. Never heard of the effect before, thanks for the heads-up. I think this is the reason my previous shots have all had some sort of field rotation when I shot on different days and didn't quite nail the framing.
  22. I should maybe have mentioned that I am using an EQM-35 PRO, which is not a very nice match (read: nightmare) for the 8-inch Newtonian. I will not be going over 60 s subs in any scenario, and preferably I would go shorter. I do have a new camera that is yet to see first light, but it has a 16-bit ADC, 80%+ QE, 14 stops of dynamic range and practically no noise, so I'm hoping 30 s or even shorter exposures bring out the halo at least to some extent. What I was wondering is how long the integration time should be per panel with a 2x2, 3x3 or even a 4x4 bin in the final combined picture. I will need at least 3 panels, preferably 4, to get roughly the same field of view as in yours. Yours looks fantastic, with the halo captured much farther out than I usually expect to see; I would be happy with a capture half as good as that!
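My rough mental model for the bin-versus-integration trade-off is sketched below: with sky-background-limited subs, an n x n software bin sums n² pixels, so the per-binned-pixel SNR grows roughly by a factor of n, which is equivalent to having n² times the integration time per unbinned pixel. This ignores read noise and all the other real-world messiness, so treat it as a first-order estimate only.

```python
import math

def relative_snr(bin_factor, integration_hours):
    """Relative per-pixel SNR, assuming sky-background-limited subs.

    SNR scales with the square root of integration time and, to first
    order, linearly with the software bin factor (n x n pixels summed).
    """
    return bin_factor * math.sqrt(integration_hours)

# 0.5 h per panel binned 2x2 lands at roughly the same per-pixel SNR
# as 2 h per panel unbinned:
print(relative_snr(2, 0.5))  # ~1.41
print(relative_snr(1, 2.0))  # ~1.41
```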
  23. The only target I'm interested in that doesn't fit my setup's field of view (200 mm f/4.2 Newtonian, APS-C sensor) is the Andromeda galaxy, which I obviously want to image if the clouds ever go away. I am using NINA, so creating the mosaic itself is nothing but a click away in the framing tool, but I have no idea about the specifics; for example, how much should I overlap the frames? My scope, being a Newtonian, will probably have some not-quite-corrected coma and other tracking artifacts at the edges, so I assume 10% is not enough. The other unknown is exposure time per panel. On one hand I think I could get away with as little as 30 minutes per panel, since I will definitely bin the final picture anyway, but is that enough? Ideally I would shoot this in one night to keep the conditions as close to each other as possible per panel, or is that overthinking it? I would like to plan ahead and not spend valuable time outside tinkering with the details. The processing part of actually combining the panels is a complete unknown for me; I know some software can do this, but are there any recommendations from people who have done the same? I assume I would roughly process each panel first and then do the combining. The exposure-time-per-panel problem of course goes away if some software can equalize the different gradients and background levels from different sessions.
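Since the overlap and panel-count question is really just geometry, here is a rough Python sketch of the estimate; the APS-C dimensions and the ~840 mm focal length are assumptions from the specs above, and M31's roughly 3 x 1 degree extent is an approximate catalogue value, so treat the output as ballpark only.

```python
import math

# Rough mosaic planning: how many panels cover the target at a given overlap?
focal_length_mm = 840.0                  # 200 mm aperture at ~f/4.2
sensor_w_mm, sensor_h_mm = 23.5, 15.7    # typical APS-C dimensions

def fov_deg(size_mm):
    return math.degrees(2 * math.atan(size_mm / (2 * focal_length_mm)))

fov_w, fov_h = fov_deg(sensor_w_mm), fov_deg(sensor_h_mm)  # ~1.6 x 1.1 deg

def panels(target_deg, fov, overlap=0.2):
    # Each additional panel adds fov * (1 - overlap) of fresh sky
    return max(1, math.ceil(1 + (target_deg - fov) / (fov * (1 - overlap))))

target_w, target_h = 3.0, 1.0            # M31 spans very roughly 3 x 1 degrees
print(f"FOV per panel: {fov_w:.2f} x {fov_h:.2f} deg")
print(f"Panels needed: {panels(target_w, fov_w)} x {panels(target_h, fov_h)}")
```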
  24. I'm happy you like it! I'm not sure I understand the 3D aspect fully, but from the failed versions of the same data set that I have processed, I think I can sort of get what you mean. It's very easy to "deep fry" an image of M101 with light pollution in the mix, and getting the right balance of even background and the faintest spiral structures took too many tries for me. At the very least, the lower part of the spiral was very easy to accidentally process out when trying to "fix" other parts of the image.
  25. Consider buying a camera outside the usual suspects of manufacturers. I bought the RisingCam IMX571-sensor colour camera and it works great. Obviously it's been cloudy for weeks since I bought it, but lots of people are happy with theirs and so am I, and the price is really competitive compared to the monopoly-gang manufacturers like ZWO and QHY, who have decided to ask 2200 euros for these products for no real reason other than that they can. https://www.aliexpress.com/item/4001359313736.html?spm=a2g0s.9042311.0.0.5b604c4dWaHMUL