Everything posted by ONIKKINEN

  1. Yes, terrible seeing. But it's bad enough that I'm thinking it's not just atmospheric seeing. Was the scope cooled down at all? This is what my captures look like if the scope is not cooled and I just begin imaging straight after setting up. If you were shooting Jupiter low in the sky, and especially over the rooftops of nearby buildings, you will get boiling views like this.
  2. What about without star removal? These kind of look like the artifacts that early versions of Starnet++ produced when the kernel/cell size (I forget what it's called) was improper, although I didn't see banding exactly like this, so it's a long shot.
  3. Ah, well, this part is the truly difficult part, isn't it? I think I have had 5 nights in 2 years when the Moon was high and the seeing decent, and on top of that I'd need the appropriate kit to hand (0 times so far!). Anyway, I don't want to sound like it's all seeing, Neil; it's definitely you that makes a lot of these images pop.
  4. Superb detail Neil 👍. It doesn't look sharpened at all, but I assume it is, so it looks just about what I think perfect processing is. How high was the Moon during capture, and did you use a specific filter with the mono camera? I am asking because I think I have a pretty good mirror in my 8 inch newt (quoted as 0.015 wave RMS by the OOUK Zygo report) and have not been able to take anything nearly as sharp as this, though admittedly the Moon has only been below 30 degrees for me so far. But you are the king of this kind of imaging as far as I can tell, so probably some other magic is at play as well. Anyway, excellent image!
  5. I'm curious, how detectable are these with the 15x56mm bins? With my 7x50mm I admit I have seen at best a kind of hallucination of them. They are very small and something I could only call "perhaps seen" objects. I am sure I have seen them, but I'm just not sure how to describe the seen part, as they are so ghostly. Just about detectable in SQM 21.2 skies. I admit the views have been handheld, and I think this is a major issue with bins.
  6. I think we have different definitions of boring... An accidental IFN capture? Definitely not boring. Maybe not that visually striking, but fairly interesting still. I do wonder how dramatic the structures would become without the NBZ filter though; if it's just IFN, there should be loads more signal without the filter.
  7. The processing part takes time to understand before it starts making sense. You'll develop a standard workflow at some point and then it's just another hurdle in the hobby, but from what I can tell the development of processing skills never stops, and I tend to dislike images I processed just a few months ago. Going back to them and seeing if they could be improved is also a great way to develop the skills. I'm guessing the more experienced folks here have been on the processing grind for decades, and it shows. I am definitely not an expert on processing, but if you want some clues to follow, here is what I did to this image, in Siril and Photoshop.

     Siril:
     1. Crop stacking artifacts out (funny corners that have not received the full integration)
     2. Banding reduction (not necessary if there is no banding)
     3. Background extraction, which is easy to work with since the object of interest is small and centered in this case
     4. Colour calibration. I used manual colour calibration with the core of M31 as the white point source. This way the image turns a bit too blue, but I adjusted it later. Photometric colour calibration works 95% of the time; this was one of the times where I felt it did not.
     5. Remove green noise (also not always necessary)
     6. Stretch, first with the Asinh transformation and then with the histogram transformation until the galaxy OR the stars look nice; it is often not possible to have both. The part that does not look nice can be adjusted later with starless processing.

     Steps 1, 3, 4 and 6, in that order, are what gets any image 90% done, and the rest is icing on the cake. Try it out in Siril! Although in this case banding reduction was necessary, I would try to do that before stacking and registration, on the calibrated files.
     Photoshop:
     1. Create a starless image using StarXTerminator (a paid plugin; you can also use the free Starnet++ V2, which is as good or better; they are competing and it's not always clear which is ahead)
     2. Create a stars-only image by subtracting the starless image from the original
     3. De-stretch the stars-only image a little to "mute" the stars and bring the galaxy forward. I also applied a little sharpening to make them a bit tighter.
     4. Select the background of the starless image with the colour range tool and desaturate it. This is a very effective way to reduce noise visually, without actually doing any denoising.
     5. Adjust the colour balance and saturation of both layers to what I think looks close to realistic, while also being visually pleasing. This part is subjective, and you'll get 5 different opinions from 4 different people if you ask what is correct.
     6. Merge the layers using the "screen" mode and done.

     Thought it might be interesting to compare the Siril-only image and the final version. The Siril part (left) of the processing often takes just a few minutes and gets the image 90% done; the Photoshop part covers the final 10%, and it's up to you how long to spend on it, but I often find that almost all of my processing time goes into that part.
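The star/starless split and the final "screen" merge described above can be sketched numerically. This is a minimal illustration with made-up pixel values, not the actual Photoshop or StarXTerminator internals:

```python
import numpy as np

def split_and_screen(original, starless):
    """Split an image into starless + stars-only layers, then recombine
    with Photoshop's "screen" blend: result = 1 - (1 - a) * (1 - b)."""
    # Subtracting the starless layer from the original isolates the stars
    stars_only = np.clip(original - starless, 0.0, 1.0)
    # "Screen" blend of the two layers (values normalized to 0..1)
    return 1.0 - (1.0 - starless) * (1.0 - stars_only)

# Made-up 1x2 "image": one pixel with a star on background, one pure background
original = np.array([[0.8, 0.2]])
starless = np.array([[0.3, 0.2]])
print(split_and_screen(original, starless))
```

Note that the screen blend brightens pixels where both layers carry signal, so the merge is not an exact inverse of the subtraction; that is part of why de-stretching the stars layer before merging helps keep them muted.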
  8. Ran it through Siril and PS too; this is what I got out of it: You have captured some pretty nice data for a beginner. Well done!
  9. Looks like typical Canon banding, just a lot more aggressive than I ever saw with my 550D. I suppose these cameras have individual differences in their sensors, so maybe that's not so strange. But yes, dithering more will smear that out and make it essentially disappear in the end. You want to dither at least 20 pixels, in your case maybe 30 or more since the pattern is so bold. Try to dither at minimum once per 5 minutes.
  10. Thanks for posting; I'm not getting any data of my own due to weather, so I'll take any excuse to work with loaned data! The data looks really nice. I came up with this: half-baked wavelets in Registax, then ImPPG Lucy-Richardson deconvolution and unsharp mask, exported as a synthetic luminance (ImPPG turns everything mono). Combined in PS with the ImPPG version as the luminance layer and the half-done wavelets version as the colour layer. Some further sharpening for the lum and saturation for the RGB layer. Topaz low light sharpening and denoise at 1 and 1, faded 50% with the original at the end.
  11. Yep, dust motes. They calibrate out and will not show up in the image afterwards, so nothing to worry about on that front. The uneven vignetting is indicative of a collimation error, or at least some kind of de-centering of the sensor (the focuser is also often the culprit). It's up to you whether to tackle that, but if stars are small and round I would not bother. BTW, here is one of my flats, which also has plenty of prominent dust donuts, none of which remained in the calibrated light frame afterwards: You can try to clean your glass surfaces thoroughly, but it's more difficult than you would imagine and, I would say, ultimately pointless, since they all calibrate out.
  12. USB-powered dew strips and any power bank to power them are also a low-effort, easy way to keep dew off optics. Not very expensive either, especially if you already have a power bank you could use.
  13. The donut-shaped dim shadows would be dust motes somewhere close to the sensor: probably on the sensor window, on a filter if one was used, or on the last lens of your corrector. Any further from the sensor than that and dust on surfaces becomes mostly meaningless. I assume that this is a stretched image and the corners are not actually 0-value? It would be really weird if they were, implying some serious obstruction in the light path (or a sensor too big for the corrector). If stretched, it's just vignetting that appears to be slightly off-center, but I wouldn't worry about that. With a Newtonian you have 2 mirrors to align, plus possible mechanical issues, so even tiny errors in the alignment can bring the most illuminated area slightly off-center. If star shapes are good enough, I would say don't bother trying to fix the off-center vignetting and live with it.
  14. Yes, I think a bit of saturation makes Lunar images a little more visually pleasing.
  15. A compromise between the two would probably be better than either, but closer to the second one than the denoised one. Try running the low light AI with denoise and sharpening both at 1, and then fade that further down in Photoshop later. The tool is very aggressive for astrophotography; often I find the best result is low light at 1 and 1, still faded down to 50% or so afterwards. It's a very subtle difference at that point, but it can be an improvement.
  16. ASTAP image inspector on a stretched frame: To me it looks like the corners and the center of the image are all aberrated with what looks like astigmatism. I would say the scope is in dire need of recollimation, and not by a small amount. These are kind of the same thing in my opinion: collimation is the act of pointing the center of your camera towards the center of your primary mirror and vice versa, and when your focuser sags, the camera points off axis (bringing the scope out of collimation). If you can't make the focuser behave and stay stable at all orientations throughout the night, you just can't have repeatable, good collimation, so maybe look into that if your collimation looks OK otherwise. But judging from the image, I would say you have both an actual collimation error at the adjustment screws and some mechanical issues elsewhere.
  17. Why not loop 2-second exposures with NINA? You will want exposures longer than liveview anyway, since seeing is mostly random below 2 s, making fine focusing very difficult when you fight against the randomness of seeing. Depending on your scope and filter you may need to take longer exposures though. But generally, focusing manually is easiest by looping few-second exposures and watching the reported HFR values: the lowest HFR value you can reach is the best focus.
  18. If the goal is to do a bit of both, the 585 would probably be the better choice based on the specs (a bigger sensor too, which for DSO stuff seems like the deal maker and breaker here). The 2-micron pixels of the 678MC are really not ideal for DSO work, although it's not a disaster, as you could just bin on camera when doing EEVA for DSOs. For planetary, either camera will do the trick if matched with the right scope/Barlow combination. The critical sampling formula for lucky imaging, f_ratio = 2 x pixel_size / wavelength (both in microns, where 0.5 microns for green light can be used to approximate an RGB capture), comes out to f_ratio = 4 x pixel_size. So an f/8 scope will be natively "ideal" with the 678; f/10 scopes are pretty close too, but might be oversampling a little. The 585 needs f/11.6 to be ideal, which is also easy to reach with many scopes using a 2x Barlow with a short nosepiece to control the Barlow strength. So I think either will do the trick given the right kit, but the 585 is more versatile for sure. Just for planetary/lunar, the 678 is very convenient for natively f/8-f/10 scopes, or the typical f/4-f/5 Newtonian with a Barlow. Choices, choices!
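The sampling arithmetic above can be written out as a quick sanity check. The pixel sizes are the published specs for the two cameras, and the 0.5 µm wavelength is the green-light approximation used in the post:

```python
def critical_f_ratio(pixel_size_um, wavelength_um=0.5):
    """Critical sampling f-ratio for lucky imaging:
    f_ratio = 2 * pixel_size / wavelength (both in microns)."""
    return 2.0 * pixel_size_um / wavelength_um

print(critical_f_ratio(2.0))  # ASI678MC, 2.0 um pixels -> f/8
print(critical_f_ratio(2.9))  # ASI585MC, 2.9 um pixels -> f/11.6
```

With the 0.5 µm default this reduces to the rule of thumb f_ratio = 4 x pixel size, which is why the f/8 and f/11.6 figures fall out directly.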
  19. The flange distance for a DSLR is 44mm plus the T-ring, so around 55mm, while with the 294 it will be (I think) 17.5mm plus the thickness of whatever nosepiece you put on it (a few mm more). So nothing is wrong here; the sensor of a DSLR just sits much further back from the connection to the scope than that of a dedicated astro cam.
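The flange-distance arithmetic can be laid out like this. The T-ring thickness here is an assumed typical value, and the 17.5 mm figure is the post's own recollection for the 294; check your actual hardware's specs:

```python
# All distances in mm; illustrative only, not exact specs for every setup.
CANON_EF_FLANGE = 44.0   # DSLR flange focal distance (Canon EF mount)
T_RING = 11.0            # assumed typical T-ring thickness
ASI294_FLANGE = 17.5     # sensor-to-front-face distance recalled in the post

# How far behind the scope connection each sensor sits
dslr_sensor_depth = CANON_EF_FLANGE + T_RING       # ~55 mm for the DSLR
difference = dslr_sensor_depth - ASI294_FLANGE     # how much closer the astro cam sensor sits

print(dslr_sensor_depth, difference)
```

That ~37 mm difference is why the focuser position changes so much when swapping between a DSLR and a dedicated astro camera.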
  20. Nice shot! These eternally cloudy skies make the idea of rented remote imaging very tempting.
  21. In the Topaz AI window you have open. If you end up using Topaz AI for astrophotography, then I recommend the low light mode with very small adjustments. Registax 6 wavelet sharpening is what folks usually use to sharpen their planetary images; it's worth looking into. Fair warning though: the sliders hardly make any sense at first and you will probably feel a bit lost when trying it out. You could also tick the sharpened box when stacking in AS!3, which saves both a raw image and a sharpened image straight out of the stacker and makes life a bit easier. The sharpened image is often easier to work with and will require fewer adjustments than the raw one, but from the raw one you could extract more if you were experienced enough (often I am not, so that might be a wrong guess).
  22. Looks great for a first shot; I shall not post mine, to spare myself the embarrassment 😁. Try the Low Light AI mode; that is often the most effective for astrophotography. I find that mode is the least temperamental of the bunch when used with low values (less than 5 is often more than enough).
  23. Here is what I got out of it with ASTAP (bin x2), Siril, and Photoshop, including StarXTerminator and Topaz: Had fun processing it; it seems like good data and all-around easy to work with 👍. I'm not very experienced in working with nebulae and non-broadband captures, so I ended up taking some liberties in the colour processing somewhere along the way.
  24. I have taken plenty of images of Jupiter less detailed than yours with a bigger scope, so it's a pretty good picture in my book. If you are stacking with AutoStakkert!3, the difference in actual resolution between mono and OSC does not apply, or in other terms it only applies to SNR and not to the actual resolving capability. AS!3 uses Bayer drizzle to plug the gaps between the pixels, which works well for planetary because the mostly random seeing makes sure that pretty much every frame is well dithered at the single-pixel or subpixel level, and because the end result will likely still have hundreds or thousands of frames to stack, so there is no shortage of SNR. The ideal f-ratio for lucky imaging uses the formula f_ratio = pixel_size x 4, so f/11.6 for 2.9-micron pixels, which is pretty close to the f/10 the scope is at now. The 2x Barlow would go way too far, though in theory it might capture a bit more than your current setup. But I would say the seeing this time was not the best it could be, and the focal length is not to blame for the end result, so the Barlow would probably not have helped.
  25. AstroImageJ seems to have the most realistic FWHM measurements of all the software I have used. Examples below: Multiple stars seeing profile: Single star example 1: Single star example 2: In the two latter examples you can see that the measurement is not that of a focused star. The purple line should reach a point (or just continue upwards) at the top, but here it starts to dim a bit before reaching the true center of the star, which means the star is hollow: not in focus.
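A rough numeric sketch of the profiles being described, assuming a Gaussian star model (the 2.355 factor is the standard Gaussian FWHM relation, not anything AstroImageJ-specific), including what a "hollow" defocused profile looks like compared to a focused one:

```python
import numpy as np

def gaussian_fwhm(sigma):
    """FWHM of a Gaussian profile: 2 * sqrt(2 * ln 2) * sigma, ~2.355 * sigma."""
    return 2.0 * np.sqrt(2.0 * np.log(2.0)) * sigma

# Radial cross-sections through a star image (arbitrary units)
r = np.linspace(-5.0, 5.0, 101)
focused = np.exp(-r**2 / 2.0)                    # peaked: brightest at the center
defocused = np.exp(-(np.abs(r) - 2.0)**2 / 2.0)  # "hollow": dims toward the center

# The focused profile peaks at r = 0; the defocused one is brighter off-center,
# which is the dip-before-the-center behaviour described above.
print(gaussian_fwhm(1.0))
print(focused[50] == focused.max(), defocused[50] < defocused.max())
```

The defocused curve is a crude stand-in for a donut-shaped out-of-focus star, but it reproduces the telltale sign: the profile stops climbing before it reaches the true center.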