Everything posted by ONIKKINEN

  1. The NINA Discord server would have an answer to that in case no one here does. Would be surprised if someone here doesn't know, though.
  2. Don't know anything about the INDIGO server thing, but from your NINA, PHD2 and Stellarium list I would drop Stellarium completely. I think it's another possible point of software failure and not all that needed when NINA is up and running. NINA's Sky Atlas contains practically every target imaginable, and those that it doesn't can be slewed to by input coordinates. If you download the offline image catalogue (a couple of GB) you have a very convenient alternative to Stellarium built into NINA, and can slew to any point from there (and easily create a sequence). Pointing and slewing can all be done with NINA and ASTAP plate solving, so I think there is no need for an external planetarium program.
  3. I used a 120MM before. I switched because I thought it had developed an issue (seems more like USB issues in hindsight, probably the cable), and because it wasn't quite good enough to guide through an Antlia Triband filter with short guide exposures. With the 120MM I had some trouble finding suitable guide stars through the filter, and had to use longer exposures than ideal for my AZ-EQ6 and its rough RA worm with 2 sharp peaks per period, which really needs less than 5s exposures to deal with. With the 220MM it has been smooth sailing with the Triband or with UV/IR. I have never rotated the camera in the OAG either, since there is no point when every field of view has at least a few stars to guide on.
  4. I have had issues with chasing seeing with my 220MM in an OAG on a 1018mm focal length f/5 scope, even when there were multiple stars. Typically there are a good number of stars too, like 7 or 8, so multi-star guiding has not fully gotten rid of the issue. Pushing exposures to 4s and above does help somewhat, but in the cases where this happened the seeing was just so bad that the improvement was of little value. But on the topic of guide sensors, the 220MM seems like a really good guide camera: 4 micron pixels, which I'm running binned 2x, high QE, low read noise. The SNR of a guide star is always very high, in the thousands or at least hundreds.
  5. I think you might have pushed the data a little bit too hard in terms of denoising or stretch (or both). It could use more data too of course; 3 hours is really not that much. Since you have NoiseXTerminator I would rather use that than any other method of denoising, like the TGV denoise you applied here. Try using only NoiseXT with the same masks you used with TGV denoise? Use a value lower than full power, like 50% or so. Too much will turn the background into a painting, which will also look blotchy since some detail that should have stayed as noise will have been evened out by NoiseXT and will look unnatural. But I'd say it's not too big of a deal here; the uncropped image is not at all blotchy looking if you don't go looking for it. The image is not too bad to my eyes, and I would probably not have noticed any blotchiness if you had not mentioned it.
  6. In good seeing and no wind my AZ-EQ6 hovers right around 0.5-0.7'', but in worse seeing it is probably closer to 1''. If it's windy, anything goes: the worst night where I didn't immediately admit defeat, pack up my stuff and head back home landed me at almost 3'' RMS for the night. That was in 12 m/s winds though, which is probably something I won't try again. If you are frequently getting 2'' guiding with your EQ6 when conditions aren't completely busted, then I think you have some issues in the mount itself. The fact that you are having stretches of better and worse guiding suggests that something is not right in the innards and something can be improved. Time for a strip down and new bearings?
  7. The processing looks pretty much perfect to my eyes. The star colours, the nebula, the dark dust, it's all there. It's a stunning image!
  8. Apparently the 571 sensor has a large number of hot pixels, at least in comparison to some other sensor models. There was a mention or two regarding this in a thread here: I don't have the QHY version, but the RisingCam OSC version of the 571. Just tested my 240s x 50 dark stack, and with a 2 sigma threshold I have more than 34k hot pixels, some of which are fully saturated. Looks like mine is not too different from yours in that sense. (A rough sketch of that kind of hot-pixel count is below.)
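Below is a minimal sketch of the kind of hot-pixel count described above, assuming a master dark saved as a FITS file. The file name and the crude saturation check are placeholders, not anything from the original post; it simply counts pixels more than 2 sigma above the mean, the same threshold mentioned.

```python
# Count hot pixels in a master dark using a simple 2-sigma threshold.
# "master_dark_240s.fits" is an assumed file name for illustration only.
import numpy as np
from astropy.io import fits

data = fits.getdata("master_dark_240s.fits").astype(np.float64)

mean = data.mean()
sigma = data.std()
threshold = mean + 2 * sigma            # 2-sigma cutoff, as in the post

hot = data > threshold
saturated = data >= 0.99 * data.max()   # rough "fully saturated" check

print(f"hot pixels (> 2 sigma): {hot.sum()}")
print(f"of which near saturation: {np.logical_and(hot, saturated).sum()}")
```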
  9. The 585 seems like the ideal sensor for this kind of thing. There is a lengthy thread here: The 2 micron pixels in the 678 are probably also not too useful for DSO work. Maybe with some very fast systems, but it seems a bit too high a resolution for deep sky. Not that there is a huge difference compared to the 2.9 micron pixels of the 585, but every bit helps.
  10. Surprisingly dusty, nice one. Don't see this one too often in a broader palette, usually just narrowband.
  11. I use that for planetary and lunar, and it's a good camera for that. But my sensor analysis run with the camera suggests that it might not be the ideal choice for DSO because of how the sensor is laid out for its intended use of lucky imaging. Linearity to 82.9%, which isn't terrible but could be better; not sure how this will affect flats for example (never taken any with mine, no need for planetary and lunar). I think the other new-generation uncooled cameras would be a better choice for DSO work. Read noise also stays quite high at low gain, which would not be ideal for longer exposures. For planetary and lunar, no objections so far from me. (A rough sketch of one way such a linearity figure can be measured is below.)
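This is not the actual sensor-analysis routine referred to above, just a rough sketch (with made-up numbers) of one way a linearity limit can be estimated: take flats at increasing exposure times, fit a line to the clearly linear low end, and see where the measured signal departs from that fit by more than a chosen tolerance.

```python
# Estimate where a sensor stops responding linearly to exposure time.
# All exposure times and ADU values below are invented example data.
import numpy as np

exposures = np.array([0.5, 1, 2, 4, 8, 16, 32])                      # seconds
mean_adu  = np.array([800, 1600, 3200, 6400, 12700, 24800, 44000])   # measured means

# Fit a line to the clearly-linear low end (first four points here).
slope, offset = np.polyfit(exposures[:4], mean_adu[:4], 1)
predicted = slope * exposures + offset

deviation = np.abs(mean_adu - predicted) / predicted
tolerance = 0.02                                   # 2% departure from the fit

nonlinear = deviation > tolerance
if nonlinear.any():
    first = np.argmax(nonlinear)                   # first exposure that departs
    print(f"departs from linear above roughly {mean_adu[first - 1]} ADU "
          f"({100 * mean_adu[first - 1] / 65535:.0f}% of a 16-bit scale)")
else:
    print("linear over the measured range")
```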
  12. I'm afraid I have no video to link, as I've not seen one on the subject. I'm using a workflow that @ollypenrice shared here on the lounge in a thread not too long ago. Can't find that thread now, but the basic gist of it is: process both the RGB image and whatever image you're using for Ha (a duoband, L-eXtreme, or whatever you have) to a point where they have a similar level of stretch, and open both as layers in Photoshop. Put the narrowband image in the blend mode "lighten" and create a layer mask for it. Create the layer mask so that only the interesting parts get put through, for example by selecting with the colour range tool or by simply copying the image into the layer mask.

I just tried this with GIMP and a similar workflow does work. You can put the narrowband image on top of the RGB image as its own layer and set it to the mode "lighten only", which seems to be exactly the same as Lighten in Photoshop. From the Select menu choose "By Color" and click on the background to pick that colour. Then invert the selection and you are left with a selection that includes everything but the background, so all the interesting parts you want to keep. Now you can easily adjust the strength of the process either by modifying the layer mask itself (open it with alt+left click) or by modifying the narrowband image itself with curves, contrast, levels, all the usual stuff.

Seems fairly straightforward, definitely more so than the pixel math route, which often does not do what you want it to. For an even better result you could separate the stars from the image with Starnet, apply the narrowband only to the starless versions of both images and then recombine later with just the RGB stars. (There is a short sketch of what the lighten blend does at the pixel level below.)
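For anyone who prefers to see it as arithmetic, here is a small sketch of what the "lighten" / "lighten only" blend described above amounts to at the pixel level: a per-pixel maximum of the two stretched images, applied only where a mask lets it through. The array and function names are placeholders, not part of the original workflow.

```python
# Pixel-level equivalent of a "lighten" blend applied through a layer mask.
import numpy as np

def lighten_blend(rgb, narrowband, mask):
    """rgb, narrowband: float arrays scaled 0..1, same shape.
    mask: 0..1 weights controlling where (and how strongly) the blend applies."""
    blended = np.maximum(rgb, narrowband)   # "lighten" keeps the brighter pixel
    return rgb + mask * (blended - rgb)     # the mask scales the effect, like a layer mask
```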
  13. For calibration you can still use the stock script, or modify it a little bit. You can find the scripts in the scripts folder of your Siril installation; for my installation it is in C:\Program Files\Siril and it probably is for you too. The .SSF files can be opened and modified with WordPad or any text editor. The simplest way to modify one is to just remove the registration and stacking lines at the end (a sketch of roughly what those look like is below this post). Remove those lines and the script ends with debayered and calibrated files with the prefix pp_light for each file. Move those files to another folder and then just delete all of the other files in the working directory. They are temporary files so nothing is lost (just make sure you don't delete your actual raw data, keep it elsewhere), and then re-run the script with another dataset and its own calibration frames.

After calibrating any number of images from any number of nights, you will need to stack manually. It's not too difficult to do; I think it's just poorly explained (if at all) by Siril and is pretty confusing for someone using Siril for the first (or tenth) time. What you need to do is import all the files you want to stack into the "Conversion" tab. You don't have to worry about calibration anymore since you already calibrated all the data with the script. Then in the "Registration" tab you register (star align) the images; you can use the "Global Star Alignment (deep-sky)" method. Leave the algorithm as Lanczos-4 and interpolation clamping ticked. After registration completes you can stack the data in the stacking tab. I would recommend the below settings for stacking:

Now, mixing Ha into an RGB image (assuming you are planning on shooting more than Lum and Ha, that is) is not so simple. In Siril there is no couple-of-clicks tool for that at the moment, but there are some ways to do it. Mixing Ha into RGB is not too difficult in a general purpose processing suite like Photoshop, but not sure you have it? If you wanted to only use Siril, you would first stack all of your individual filter datasets into their own stacks and then combine the red and Ha stacks with the pixel math tool. If you are working on OSC data, extract the red from the RGB stack. You would first need to register the two together, and then linear fit the Ha stack to the R stack to equalize their levels. Then you can use the "max (x, y)" function to combine the two images so that only the pixel maximum values from both are left, which means that the regions of the image where Ha is stronger than broadband red get boosted. This will also boost your noise, because the Ha stack will be much noisier in the background, so this is far from a one-click thing. You can then blend that with the original red channel by, for example, stacking with average and no rejection. If the pixel math wizardry sounds like a bit too much, you can also try a simpler 50/50 blend by just stacking the red stack with the Ha stack, which, when average stacking with no rejection is chosen, will just take the average of the two images. This will hurt broadband and boost Ha. Probably the simplest method in Siril.

Really not sure I would recommend mixing Ha into RGB in Siril for a beginner. Actually, having done that a few times myself, I still don't want to do it that way, because there are simpler tools (like Photoshop and PixInsight, but both costly of course). I'd be happy to walk you through the manual stacking process if there are some head-scratching moments when/if you do decide to try it. The initial hurdle of abandoning the automatic script stacking is the biggest step, but once you get used to it, it becomes not too cumbersome to do.
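For reference, this is roughly what the end of a stock Siril preprocessing script (.ssf) looks like; the exact commands and sequence names vary by Siril version and by script, so check your own copy before deleting anything. These are the kinds of lines to remove if you only want the calibrated pp_light files:

```
# Align lights
register pp_light

# Stack calibrated lights to result.fit
stack r_pp_light rej 3 3 -norm=addscale -output_norm -out=result
close
```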
  14. Looks pretty cool to me. Sorry, had to do it 🙂. That's a very nice image, and presumably taken in warm Texas conditions too, so I think you've proven the point that uncooled is not as evil as it used to be.
  15. Very impressive galaxy fields from Bortle 7! Great detail on all of them too, what a nice set of images.
  16. M33? Plenty of broadband signal, with emission nebulae as well that would pass through the filter.
  17. Thank you, TypeCat is just the tool I wanted for the job. PixInsight threw me through an update loop of course, but all sorted now. Now I see that these things are everywhere; never would have guessed there are so many that can be found.
  18. What's the best tool for finding these distant globulars in images? Interested in going on the hunt myself, but at least the toolset I use doesn't have obvious globular annotation capabilities.
  19. For different datasets you need another approach. You can stack them into one image if you want to, but not by using the stock script, or at least not in a way that properly calibrates everything. What you need to do is calibrate all the different datasets with their own matching sets of calibration data and then stack it all manually afterwards. You can use the stock script for calibration, since it blurts out a bunch of files with the prefix pp_light. These are the calibrated files you want to save to another directory for later use (you can delete the rest to save space for now). Do this for any number of nights' worth of data, then manually import all of these into a new sequence, register them and stack at the end. You can also modify the stock script by simply removing the lines towards the end that register and stack the images. This way you don't have to sit through the registration and stacking phases if all you want is to calibrate. Or, like above, use Sirilic, which will do anything you want but without the manual directory and file moving work. But you probably shouldn't stack Ha and Lum into one image, if that was what you were planning; this was just a tip for multiple-night/different-exposure-type work. (A small script-style sketch of the manual register-and-stack step is below.)
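As a rough sketch, the manual register-and-stack step can also be written as a small Siril script once all the calibrated pp_light files from every night are copied into one folder (called "calibrated" here, a placeholder name). The command syntax follows the stock 1.x-era scripts and may differ slightly in other versions:

```
# Build a sequence from the already-calibrated frames, then align and stack them
cd calibrated
convert light -out=../process
cd ../process
register light
stack r_light rej 3 3 -norm=addscale -output_norm -out=../final_stack
close
```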
  20. I can't tell from this image what shape the corner stars are, so at least it's not obviously wrong. Instead of back spacing, it could also just be the normal loss of sharpness far from the centre of the field of view that happens with almost every imaging corrector. See if you can find a spot diagram for your corrector to confirm that?
  21. Nice one! Don't see this one imaged too often. I was planning on imaging it if I finish my M33 and Stephan's Quintet this year, but now I think I don't have to, since you've done such a great job of it. Not that I probably would have had the time to try this year anyway 😉.
  22. With regards to CA, I use the new version of the Baader Solar Continuum filter on Jupiter and the Moon (not for Saturn, too faint) with my 90mm f/5.5 Long Perng semi-achromat. It makes for a very faint disk, but I find it makes the bands really pop and I see more than with no filter or a Baader Fringe Killer. It has really become my go-to filter for Jupiter observing with the 90mm. Try it out some night? You might be surprised by the improvement in contrast with your 120mm achro.
  23. My solution to the Windows update issue is that I have never connected my mini PC to the internet and don't plan to, because I really don't need to update anything on it other than software, which I can just transfer with a USB stick. A travel router creates a dummy Wi-Fi network that I connect to via remote desktop, so no actual internet connection is needed at any point.
  24. SCNR green seems to work in reducing the greenish background and makes many of the OIII specks pop out a bit more in the image, but it leaves a fairly blue background, at least in this JPEG. Try running SCNR green at an earlier stage in the process, such as before stretching or after a first mild stretch? Don't know if you have Photoshop, but that's where I would finalize the colour palette by adjusting the levels close to the end of the process. You could also try boosting the blue channel with a curve to give it more oomph if you want to. (A small sketch of what SCNR green does per pixel is below.)
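As an aside, this is a sketch of what an SCNR-green pass (the "average neutral" flavour) does per pixel: green is pulled down to the average of red and blue wherever it exceeds it. Real implementations (PixInsight, Siril) offer more protection methods and options; the amount parameter here is just an illustration.

```python
# Average-neutral SCNR green, expressed as plain array arithmetic.
import numpy as np

def scnr_green(rgb, amount=1.0):
    """rgb: float array of shape (H, W, 3), values 0..1."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    neutral = (r + b) / 2.0
    g_new = np.minimum(g, neutral)                      # clamp green to the R/B average
    out = rgb.copy()
    out[..., 1] = g * (1.0 - amount) + g_new * amount   # blend original and corrected green
    return out
```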
  25. Some excellent quality datasets can be downloaded in the data release threads here: https://stargazerslounge.com/forum/294-iki-observatory/