Everything posted by ONIKKINEN

  1. Gradient from the sky, from both light pollution and moonlight, but in your case at Bortle 4 the 60% Moon is a larger contributor than the light pollution. Background extraction will get rid of it, but I think you already knew that, since the gradient is gone in the processed image.
  2. Smart sharpen or unsharp mask in Photoshop would be my first choices, applied with masks on the starless layer where you want them, so you can ignore the background, which has nothing to sharpen and will only gain noise. You may also want to try that on the star layer if you would like your stars to be harder (a matter of taste, I think, although I like yours). Something like 1.5-2 px width for the filter, then dial it in with the percentage. Texture and clarity in Camera Raw or Lightroom are more of a contrast tool, not exactly the same as sharpening, but you might want to try those too and see how they change the image. There should be some detail in the very bright core if done right, but it is tricky to do right, I will say, and most images have the core saturated. I have shot M42 only twice, and I think both times the images sucked and yours is better, but this is what the core might look like if it were not blown: As for how to blend that with the main image, you'll need to get creative. You could use the Darken blend mode, for example, and tweak the core layer so that it blends nicely into the surroundings. Another way would be layer masks, or simply deleting the core with the Color Range tool and keeping the unblown layer below. Many ways to skin the cat, but all will need some experimentation.
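     For anyone who wants to try the same masked sharpening outside Photoshop, here is a minimal sketch with Pillow. The file names are hypothetical placeholders, and the radius/percent values are just the starting points mentioned above, not a recipe.
     ```python
     # Masked unsharp sharpening, roughly equivalent to the Photoshop
     # workflow above. File names are hypothetical placeholders.
     from PIL import Image, ImageFilter

     starless = Image.open("starless.tif")              # starless layer
     mask = Image.open("detail_mask.tif").convert("L")  # white = sharpen, black = protect

     # Roughly the 1.5-2 px filter width mentioned above; tune with `percent`
     sharp = starless.filter(ImageFilter.UnsharpMask(radius=2, percent=150, threshold=0))

     # Keep the sharpened pixels only where the mask allows, original elsewhere
     Image.composite(sharp, starless, mask).save("starless_sharp.tif")
     ```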
  3. First proper image, stacked from 3 nights, new to the hobby, did I read that right? 😲 I think I should be asking you for advice on how to do all this! Seriously, you've skipped a year or two of the average imager's learning path, which is quite impressive. The image is spectacular for a first shot like this (and actually it would be a good image even if it were your 50th); I really can't find anything specific to critique about it. A couple of small things could be improved, but no big deal. The core is blown, as it often is in M42; you could take some shorter exposures just for the core and mix them in with this one to recover the saturated parts. If your raw lights are not saturated, then you can recover the core by simply not stretching so hard, but you'll likely still need to play around with layers to blend the core into the rest of the image. And maybe the nebulosity could use a touch of sharpening, but that is often a matter of taste, so up to you.
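     If it helps, that short-exposure blend can also be done numerically instead of with layers. A minimal numpy sketch, assuming both stacks are already aligned and normalised to [0, 1]; the file names and the 0.8 threshold are made up for illustration.
     ```python
     import numpy as np

     long_stack = np.load("m42_long.npy")    # long exposures, core blown
     short_stack = np.load("m42_short.npy")  # short exposures, core intact

     # Blend weight ramps from 0 to 1 as the long stack nears saturation,
     # so the transition around the core stays smooth
     w = np.clip((long_stack - 0.8) / 0.15, 0.0, 1.0)
     blended = (1.0 - w) * long_stack + w * short_stack
     np.save("m42_blended.npy", blended)
     ```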
  4. The credit for the tight stars in the cluster should go to BlurXterminator for sure. The data isn't at all special at roughly 3.6'' FWHM. Pre- and post-BXT below: I haven't used BXT on globulars before, but it looks like the tool is perfect for them, based on this short image.
  5. The globular cluster M3, 120x60s with an 8'' Newtonian: Taken under a full Moon on Thursday night in decent seeing, but a little windy, although BXT hides the issues from that, so no major problems there. Flats did not work for this one; I suspect because of light leaks from the full Moon and the fact that I had set up on clear ice (the alternative was to shovel half a meter of snow, no thank you), which would act like a mirror, shining the very bright moonlight through the bottom of the scope. The bottom was covered with a spandex/fabric-type dust cap, which is slightly transparent, and I guess a bit too much light found its way through and ruined the flats. GraXpert rescued the image in the end, and Content-Aware Fill removed some dust spots that remained. Processing in PI and Photoshop; overall very simple to process apart from the bad-flats trickery. -Oskari
  6. This is an excellent image, to my eyes at least. Saturation and colour seem to be in a good place; I really like the vibrant blues you have here. But more importantly, I think saturation and colour, especially with narrowband, are a matter of taste, and opinions will differ between people. Can't please everyone, so try to please yourself when processing and work with that.
  7. Indeed, I for one am very interested in how it works out in the end. Mine has seen 3 winters and has been rained on twice, and it is generally always used outside the recommended 80% max humidity mark, so I am expecting trouble one of these days.
  8. That sucks; it may have been a simple fix to just swap out the boards. But I also understand the other side a little. Surely they are interested in what exactly went wrong with yours and would like to inspect the unit, and, as you said, if the problem doesn't go away or comes back later (for instance if the PCB was a symptom rather than the source of the issue), then there is a chance of more backlash for customer support. I used to work as a car mechanic, and this kind of situation was a minefield to navigate. Some customers would state that they had diagnosed parts x and y to be at fault for some electrical problem and didn't want to pay for diagnostics. Lo and behold, the parts were not the source of the problem, and then they wanted refunds. The moral of the story is that often the best service customer support can offer is no support for self-diagnosed issues, at least in the long term. Sucks for everyone, but statistically it sucks less for the manufacturer/supplier.
  9. Another version, with a manual colour calibration this time: It wasn't quite as straightforward as simply SCNR-ing the green away and calling it a day (skill issue, most likely) to get the yellow flame, but here it is anyway.
  10. It would look like the usual flame with a manual colour calibration + SCNR; I know because I did that a year ago, when I didn't yet have PI. I will post a version sans SPCC later today or tomorrow. But I think the lack of yellow is "real" (whatever real means in narrowband imaging), because the filter does not really pass any yellow: the bandpasses are roughly 35 nm in the deep blue, OIII, and Ha/SII, so no yellow or orange gets through.
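      For reference, the "average neutral" flavour of SCNR green removal boils down to roughly a one-liner. A numpy sketch, assuming an RGB array scaled to [0, 1]:
      ```python
      import numpy as np

      def scnr_average_neutral(rgb):
          """Clamp green to the mean of red and blue wherever it exceeds it."""
          out = rgb.copy()
          out[..., 1] = np.minimum(rgb[..., 1], (rgb[..., 0] + rgb[..., 2]) / 2)
          return out
      ```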
  11. Thanks, it's the filter + SPCC in PixInsight that leads to the red flame, at least I think it is. I didn't do anything specific to the red channel that would give a red look to things that aren't supposed to look red (like a red curve, etc.). Interestingly, the flame is more yellow before colour calibration takes place: So I think one could get the "usual"-looking palette with a more manual approach to colour calibration instead of SPCC.
  12. Thanks, binning and resampling to 3.6''/px help a lot, and NoiseXterminator took care of the rest. I apply NXT with masks and layers where I think it's needed most, at several points in the process: before, between, and after stretches. Thank you, I think I see what you mean. I often push sharpening too hard and only spot it a day or two later, once my eyes are no longer "used to" the new image. I softened some parts a bit to take the edge off. It doesn't look worse, at least to my eyes, so probably an improvement.
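      For context, that sampling rate comes from the usual pixel-scale formula. A quick sketch with illustrative numbers: the 3.76 µm pixels of the IMX571, and a 1000 mm focal length assumed purely for the example.
      ```python
      def pixel_scale(pixel_um, focal_mm, binning=1):
          """Arcseconds per pixel: 206.265 * pixel size (um) / focal length (mm)."""
          return 206.265 * pixel_um * binning / focal_mm

      print(pixel_scale(3.76, 1000))             # ~0.78 ''/px unbinned
      print(pixel_scale(3.76, 1000, binning=4))  # ~3.1 ''/px binned x4
      ```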
  13. The Horsehead and Flame nebulae, shot with an 8'' Newtonian, a Rising Cam IMX571 OSC camera and an Antlia Triband RGB Ultra filter: 42 min per panel, for a total of 2 h 48 min over 3 nights, the first of which was in December 2022 and the other two in the past few months, with one of them yielding just 6 minutes per panel due to 12 m/s winds throwing the scope around. This is a tricky target to image from 60 degrees of latitude, so data trickles in at a pace of about an hour per year, it seems. I'll be lucky to reach my 10 h target by the end of the decade! Binned x4 and further reduced by resampling to 80% near the end of the process. Mosaicking in PixInsight, with BXT run in correct-only mode beforehand (works flawlessly, absolutely pixel-perfect mosaic!); the rest of the process mostly in Photoshop. Comments and critique welcome. -Oskari
  14. I used to be quite stressed by the idea of going out to image, but repetition and experience took that away. Also planning for every possible situation, so that I don't have to do any head-scratching on site. I have a primary target, a secondary target, a bad-seeing/high-wind target, a good-seeing target, and many "reserve" targets for which I have already created sequences, so there is pretty much no chance that I have to decide anything on the go. I'm pretty sure I have more than a hundred dark-site trips by now, so there are hardly any new obstacles to cross, or at least no shocking surprises to ruin a night. Weather is the one thing I can't help, and it's just a fact to accept that 10-20% of all trips are a waste of time. So, planning and experience are what helped me with the anxiety.
  15. Both are amazing, but I think I very slightly prefer the second one.
  16. I dither once every 20 minutes at the moment with an IMX571 OSC camera, which is very clean to begin with, so there is little need for dithering. Calibration with a bad-pixel map and a matching dark takes all the hot pixels out and so far seems to work well. I used to dither more often, but it takes too much time away from imaging with how long my mount usually takes to settle afterwards, so I have tried to reduce it. If the camera is not so clean, like DSLRs and some older models with fixed-pattern noise, then I think one should dither more often (I don't know whether that's the case for the 183). Also, if there is significant cone error or polar alignment error (resulting in field rotation), then dithering might be a good idea to break the pattern that can emerge from that. Try dithering once every 10 minutes and adjust in either direction from there if walking noise or some other pattern noise emerges.
  17. There is another option besides binning that also reduces the sampling rate by a factor of 2 with OSC cameras: the CFA split method of handling data. With CFA splitting, you split each raw, calibrated but not debayered sub into its 4 source channels: 1 red channel, 2 green channels and 1 blue channel (for normal RGGB/GRBG Bayer matrix cameras, which is almost all of them). This reduces the load when stacking to only a quarter of RGB stacking for the red and blue channels, and half for green. The total time required to stack stays about the same, since you will be doing 3 stacks to get a composited RGB image, but if your PC is struggling and you are going to bin x2 anyway, this might be a worthwhile option; a sketch of the split is below.
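      A minimal numpy sketch of the split itself, assuming an RGGB sensor (swap the row/column offsets for other patterns); the input is a calibrated, undebayered frame:
      ```python
      import numpy as np

      def cfa_split(raw):
          """Split an undebayered RGGB frame into its 4 CFA planes,
          each half the resolution of the original in both axes."""
          r  = raw[0::2, 0::2]   # red
          g1 = raw[0::2, 1::2]   # first green
          g2 = raw[1::2, 0::2]   # second green
          b  = raw[1::2, 1::2]   # blue
          return r, g1, g2, b
      ```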
  18. The target will be about a milliarcsecond in apparent diameter, so it will appear like any other dim star. But from a magnitude point of view I don't see why not; you just need to spend some time on it. Plenty of magnitude 19 stars can be seen in an image with some time spent on it, and I think my magnitude record is somewhere in the 22 ballpark, but that took over 30 hours from good skies, so probably not what you're looking for. If you try this, take as long an exposure as you possibly can and stack without rejection, so that the rejection algorithms don't discard the target if/when it moves a little between subs (though I'm not sure how quickly it would move at that distance).
  19. There is something like this: symbolic links. They create a "virtual" file that takes no space on disk and can in fact live on another disk, so you can save some space, but this only applies to the initial sequence you create. Registration will still always write new files, since the files themselves are modified to perform the registration, so the minimum space required to stack a dataset is the calibrated subs (on any disk in the PC) plus the same subs registered (in the working directory of Siril). In other words, the minimum space needed on a single hard drive is just the initial calibrated dataset, or twice that if there is only one drive. If symbolic links are not used, then you need 3x the initial calibrated dataset's worth of space (the calibrated subs, the same subs converted into a sequence in Siril, and the registered data).
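      As a sketch of the symlink idea, here is how one might populate a Siril working directory with links to the calibrated subs instead of copies. The paths are hypothetical, and on Windows creating symlinks may require developer mode or admin rights.
      ```python
      import os
      from pathlib import Path

      src = Path("/data/calibrated")   # calibrated subs, possibly on another disk
      work = Path("/ssd/siril_work")   # Siril working directory
      work.mkdir(parents=True, exist_ok=True)

      for sub in sorted(src.glob("*.fit")):
          link = work / sub.name
          if not link.exists():
              os.symlink(sub, link)    # zero-byte link instead of a full copy
      ```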
  20. E-begging seems to be the fate of all YouTubers who start out as interesting hobby channels done for fun. The moment the words "buy my merch" or "subscribe to my Patreon" come out, it's time to go and never look back. There are exceptions, but most hobby-YouTubers-turned-career-YouTubers really drop off a cliff in terms of how enjoyable their content is.
  21. If you have peripherals to spare (a monitor, mouse and keyboard), then a desktop built from parts you select yourself will be the best bang for the buck. If not, I'm not so sure, as a monitor costs an extra couple hundred on top. But for processing I think it's worth going for a desktop, as you can upgrade it at will later by expanding storage or buying extra sticks of RAM. Laptops don't do so well under high load for extended periods because of their limited ability to shed heat, which results in thermal throttling of the CPU. Not always the case, but often.
  22. Yes, for the RASA 8, which is a much bigger scope and presumably quite a bit more expensive, if we compare the pricing between a C6 and a C8 and assume a similar difference would hold for a potential RASA 6 vs RASA 8 (pure guesswork, of course). The C6 isn't too heavy, though, and I reckon a RASA 6 could ride well on more affordable mounts like an EQ5 or Celestron's own AVX, which is why I too think it could be a pretty hot scope flying off the shelves if they ever sell just the OTA.
  23. I think you could use Sirilic for that, but it's been a few years since I used it, so I'm not sure exactly how; it might get convoluted with how many nights you have going on here. What I would do is first calibrate both sets from all the different nights, save them to their own folders somewhere, and then stack with Siril manually (also, I feel your pain, I have 3 hours on the Horsehead from 3 nights. Sometimes I wonder why I even bother with the hobby...). Since everything is calibrated, it's now as simple as importing all the data into a sequence using the Conversion tab in Siril (just drag and drop), registering it all using global star alignment, inspecting the plot and making rejection choices if you wish, and finally stacking it all.
      The only choice you have to make is which dataset to register to, and that is decided by the first image imported into the sequence. So, as an example, have the first image be from the bigger scope; then, when you do global star alignment in the Registration tab, all the following images in the sequence are transformed to fit it and inherit its pixel scale. It's important not to use the 2-pass method here, as that can choose some other image as the registration frame; only global alignment will work if you want to force Siril to stack to a specific frame.
      When stacking it all into one image, you get a more accurate average of all the input subs, so in theory a better signal-to-noise ratio. Stacking set 1 first and set 2 second and averaging those two stacks will be less accurate than forming one stack out of everything (a toy demonstration of why is below). But since you have rather significantly different scopes and datasets here, I'm not quite sure what the best way forward is in practice rather than in theory. Try the one-stack method, compare it to the two stacks averaged, and see if there is a difference. It might not be too noticeable, in which case you could do it either way, but my money is on the one-stack method providing a better result, if only by a slight margin.
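      The toy numpy check of that averaging point: with unequal sub counts, naively averaging the two stacks weights each sub differently from one combined stack. The 40/10 split here is made up for illustration.
      ```python
      import numpy as np

      rng = np.random.default_rng(0)
      set1 = rng.normal(100, 10, size=(40, 64, 64))  # 40 subs, scope 1
      set2 = rng.normal(100, 10, size=(10, 64, 64))  # 10 subs, scope 2

      one_stack = np.concatenate([set1, set2]).mean(axis=0)   # all 50 subs at once
      averaged = (set1.mean(axis=0) + set2.mean(axis=0)) / 2  # stacks averaged 50/50

      # Weighting each stack by its sub count recovers the one-stack result
      weighted = (40 * set1.mean(axis=0) + 10 * set2.mean(axis=0)) / 50
      print(np.allclose(one_stack, weighted))   # True
      print(np.allclose(one_stack, averaged))   # False (subs weighted unequally)
      ```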
  24. It sure is incredible. Technology has improved at Mach 5 in the past few decades, which makes me wonder where the continued improvements will take us.