Everything posted by Robculm

  1. Siril (thanks to your recommendation :-)) for stacking (sorry, I still haven't learned the full process yet, just using the OSC_Preprocessing script), then usually green noise removal, photometric colour calibration, background removal & finally the 'autostretch' & save as TIF. GIMP for final editing, with the Pyastro plugins, although to be honest, the GIMP end of my processing is still a bit random! If I'm lucky I get a reasonable starting point from Siril & there's not much to do in GIMP, maybe some saturation, purge red sky, contrast & export as JPG... I tried doing a heavy gaussian blur on the M13 image & subtracting a little of that from the main layer, but typically with GIMP, the more I do the worse the result...
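The blur-and-subtract trick mentioned above is essentially background flattening: a heavy blur estimates the large-scale glow/gradient, and subtracting a fraction of it leaves stars and structure intact. A toy 1-D sketch of the idea in pure Python (a box blur stands in for the gaussian; function names and the 0.8 amount are illustrative, not from GIMP):

```python
def box_blur(signal, radius):
    """Crude large-scale background estimate: mean over a wide window."""
    n = len(signal)
    return [
        sum(signal[max(0, i - radius):min(n, i + radius + 1)])
        / (min(n, i + radius + 1) - max(0, i - radius))
        for i in range(n)
    ]

def flatten_background(signal, radius, amount=0.8):
    """Subtract a fraction of the blurred copy, as in the GIMP layer trick.
    Subtracting *all* of it (amount=1.0) tends to eat faint nebulosity,
    which may be why 'the more I do the worse the result'."""
    background = box_blur(signal, radius)
    return [s - amount * b for s, b in zip(signal, background)]
```

In GIMP terms this is the blurred layer in Grain Extract / Subtract mode at reduced opacity; the key is keeping the blur radius much larger than any real detail.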
  2. Thanks. So how do you keep the centre brightness down / keep the detail? Is that done in processing? Or is it like that in the raw data? 🤔
  3. Hi Alacant, Can I ask what exposure length you were using? Thanks, Rob
  4. Okay okay, I know it's not a very original target, but I'm back to unguided until I fix or replace my laptop & it was such a fantastic rare clear night (Thursday), I couldn't resist a go at something! Captured about 4h of data (2min exposures at ISO800), but after deleting the worst ones (tracking errors), this is around 2h of data. As always with globular clusters, I struggle not to blow out the centre. Should I maybe have used a shorter exposure or lower ISO? Tried to compensate a little in GIMP, but any more & I end up creating a large black hole in the centre! Anyway, it's my best globular cluster so far & was fun to image / see like this, as it was always a favourite 'visual' target of mine in the past (mainly as it was easy to locate!), so nice to see a great deal more than the 'averted vision blur' 🙂
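One common way round a blown cluster core (not from this thread, just a standard trick): shoot a handful of much shorter subs as well, stack the two sets separately, then blend the linear results pixel by pixel, swapping in the rescaled short-exposure data only where the long stack is near saturation. A minimal sketch, with illustrative names and an illustrative 0.8 threshold:

```python
def blend_core(long_px, short_px, scale, threshold=0.8):
    """Blend two linear, normalised (0..1) exposures pixel by pixel:
    keep the long exposure except near saturation, where the rescaled
    short exposure takes over, with a smooth hand-over to avoid a seam.
    `scale` matches the short sub's brightness to the long one
    (roughly the exposure-time ratio)."""
    out = []
    for lp, sp in zip(long_px, short_px):
        if lp < threshold:
            out.append(lp)
        else:
            w = min((lp - threshold) / (1.0 - threshold), 1.0)
            out.append((1.0 - w) * lp + w * min(sp * scale, 1.0))
    return out
```

The blend has to happen before stretching, while the data is still linear, otherwise the brightness ratio between the two stacks no longer matches the exposure ratio.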
  5. Many thanks for all of your input & suggestions. It's really interesting &, as with all aspects of astrophotography, it appears there are a multitude of different approaches! Longer term, I can certainly see the attraction of the Pi solution (or possibly a ZWO ASIAir Pro? which appears similar unless I'm mistaken?). But I guess I'm struggling with fear of the unknown! Being a newbie & definitely no expert when it comes to 'computers', I'm trying to advance little by little. Getting to grips with the image stacking & processing side has already been quite a challenge. Now moving on to autoguiding... But I still very much feel I need to be 'at the telescope', at least until I have a better understanding of how everything's working. I think for me, it's therefore going to be a new budget laptop, get familiar with PHD2 & hopefully bag some decent images, then maybe explore EQMOD or suchlike to start controlling everything from the computer... If I can get the hang of these things through the warmer weather, maybe next winter will be a nudge towards considering a more remote approach!
  6. After a lot of effort 'tuning' the HEQ5 pro & still finding I'm really limited to ~ 1 - 2min unguided photos, 3mins if I'm happy to bin 50% of them (which I'm not!), I've taken the plunge & purchased a ZWO 120mm / mini guidescope package :-). Dusted off an old laptop, installed PHD2 & did some test exposures. For the 2nd session I installed ASCOM & experimented with the guiding assistant etc. Appeared to be making some progress until the high cloud set in. Tried to switch on my laptop the next day, to have a look at the log data etc, to discover that 2 short sessions outdoors have caused the demise of said laptop! 😞 So back to the title question. What are people's thoughts on: 1. New bottom-of-the-range £200 laptop, for guiding purposes only. 2. Some kind of WiFi hub & running from my 'good' (i.e. not going out in the cold) indoor laptop! My concern with the WiFi option is that you certainly need to be outside for the initial set up, alignment, focus, triggering the intervalometer (using a DSLR) etc, so I'd need to take the laptop outside to begin anyway, which kind of seems to defeat the purpose. I'm not looking to add extra 'remote' capability, just looking for the lowest cost option to get started on the guiding... Cheers, Rob
  7. Hi, here's my attempt at the Coma cluster. ISO800, 3min subs, ~ 1h imaging time. I'm struggling with PEC at the moment, so only ended up with 1h of usable images out of about 2.5h 😞, so it's lacking the 'data' it should have had! PEC training on the HEQ5 is a bit of a joke (well it is when you're doing it by eye) to be honest. Looks like I'm going to have to invest in a guide camera at some point... It's a fantastic target though, thanks for highlighting this one. Many years ago I had a favourite poster of a galaxy cluster from the HST. They really are mind-blowing, huh! Cheers, Rob
  8. I'm driving directly from the SynScan controller, no external apps & no guiding. Initially I did need to spend a lot of time getting things set up: levelling the mount, polar alignment, the longitude / latitude / altitude etc, & in particular the 'cone error' caused me problems & took some figuring out. I do always perform the 3 star alignment routine, but everything seems fairly repeatable now, even if I dismantle everything between sessions (I have permanently marked the mount position on my patio though!).
  9. Hi AstroMuni, Thanks 🙂. This was with 24 x 180s exposures at ISO400 (started with 30 or so, but deleted the ones with elongated stars) with x20 each of flats, darks & biases. I'm still very much experimenting with different exposures & ISOs etc, but from the handful of sessions to date & lots of single test shots at different settings, this worked out the best combination so far. I'm thinking of trying to push the exposure a little longer (4 or 5 minutes maybe) in future, now that I've got the Rowan belt mod sorted (assuming that works to fix the star elongation problem). I read that the 'mountain' in the histogram on the camera should be 1/4 - 1/3 in from the left; I've not managed to get any further than about 1/8. Although I'd imagine it depends somewhat on how 'bright' the particular target is?! Cheers, Rob
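On the histogram 'mountain' rule of thumb: the peak is set mostly by sky background, not the target, and sky signal grows roughly linearly with exposure time (or one stop per ISO doubling). So as a rough first guess, moving the peak from 1/8 to 1/4 of full scale means doubling the exposure, ignoring the camera's bias pedestal and assuming nothing important clips. The arithmetic, as a tiny hypothetical helper:

```python
def exposure_for_peak(target_frac, current_frac, current_exposure_s):
    """Rough exposure needed to move the histogram peak, assuming the
    sky-background signal (which sets the peak) scales linearly with
    exposure time. Ignores the bias offset, which adds a constant
    pedestal, so treat the result as a starting point only."""
    return current_exposure_s * (target_frac / current_frac)

# 180 s subs put the peak ~1/8 from the left; to reach ~1/4:
exposure_for_peak(0.25, 0.125, 180)  # -> 360.0 s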
  10. I've seen some references to Sirilic, will definitely explore, thanks. HTH, ha, I chuckled at your politely suggesting I learn how to use the tools properly 😆. You're absolutely right, lack of computer skill is indeed no excuse for laziness! Will endeavour to get a better understanding of the processes. Siril is a great piece of software & seems to cover a significant amount of the whole process, so definitely worth the effort I think... Regarding StarTools, actually I did take a look, but almost ashamed to admit that I couldn't get it loaded up properly on my computer! 🙄 ('Hardware guy' as I may have mentioned!!!) I think it was something to do with folder permissions... Anyway, the moon is passing & hopefully some clear skies soon... Will try for the UHC filter imaging on M101 & then I'm keen to switch my attention to some globular clusters. That was always a favourite target for 'visual'. I've made an attempt with M3, next planning some 3 - 4am sessions with M4 & M5! Thanks again, Rob
  11. Hi All, many thanks for your comments & feedback, much appreciated. I do find Siril really nice to work with, but looks like I need to understand it in more depth. So far I've just been putting the lights, darks, flats, biases into one folder & linking that as the current working directory, then selecting the osc_preprocessing command, which runs & outputs the 'result'. I haven't needed to understand the conversion, sequence, pre-processing, registration tabs! HTH, when you say pre-process both nights (n1 & n2), what does pre-process actually mean? Is it just to sort the lights, darks, flats, biases from each night into separate folders (n1 & n2)? Put those two folders into a single folder & select that as 'working'? To be honest, I'm very much a 'hardware' person! On the 'software' side, anything other than 'clicking this or that command' is somewhat challenging!!! 😞 Anyway, for now I'm still holding out for that n2! Hopefully in the next week or so now that the moon is getting later & the weather improving... But will try to experiment with some 'dummy files' in the meantime. I am wondering though, is it not better to have two completely separate images so as to edit them differently in different layers? For example to edit the nebulosity separately? Or is there no way to align the two images unless you 'stack' them together? The other quick question was about the file format. As I mentioned, I've been exporting from Siril as 32bit, but can only use the Pyastro plugins in GIMP if I export as 16bit. Anyone else seen that problem? Thanks again, Rob
  12. Hi all, Since my previous post getting started with a smartphone (through a 200PDS with HEQ5 pro), I've made several 'upgrades' & spent a lot of time 'trying' to improve processing skills etc, but have several questions on which I would very much appreciate any advice / suggestions. Firstly I've progressed to a DSLR (after much deliberation I went with a Canon EOS 800D) & so far very happy :-). Have also just completed the Rowan belt upgrade as I was getting elongated stars on quite a lot of frames. Have yet to try it out, what with the weather as it's been! I will attach the latest endeavour (M101) (pre-belt mod, but having deleted the bad frames!). First question is, I'd like to repeat this image with a UHC filter, to try & pick up some of the nebulosity in the galaxy. I've found how to add a separate image as another layer (in GIMP), but how about alignment? Is there any way of doing this automatically, or do I have to align manually? Since I would always need to remove the camera in this scenario (to fit the filter), I assume I would need to fully stack / pre-process the two sets separately & then 'merge' them together in GIMP. But the alignment is what's worrying me... 2nd question is really about processing flow. So thanks to previous advice from HTH, I'm using Siril to stack (lights, darks, biases, flats); I then get a bit lost as to which aspects are best edited in Siril & whether it matters which order you do them in before exporting to GIMP. I'm typically doing photometric colour calibration, green noise removal, autostretch & background removal (which sometimes works well & sometimes doesn't, depending on the particular image!). Then export (I was doing 32bit floating point, but having now just added the Pyastro plugins, I find I can only use them if I export as 16bit!?).
In GIMP, I haven't played much with Pyastro yet, only just added it, but typically I'm fighting with more background removal (gradients, fill with background colour, etc). I find all the blur & despeckle techniques often mentioned don't work for me; to be honest, most attempts to get rid of the noise result in loss of image data! Then maybe some saturation & the basic contrast etc. Actually, this particular M101 image didn't need much processing at all in GIMP, so I'm thinking the key is getting more imaging time (signal) & hopefully minimising noise to start with! I'm using ISO400, which seems good for the 800D, & 3min exposures on this one. Previous targets were shorter exposures (various ISOs) & I don't think I was getting enough signal over the noise... Anyway, so really just some advice please on the best process flow with these two software packages & any other suggestions! Many thanks, Rob
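On the alignment question above: star-matching registration (e.g. Siril's registration step) is the robust answer, but if two stacks of the same framing differ only by a small translation, plain phase correlation will find the pixel offset automatically. A minimal NumPy sketch of the idea (assumes pure translation with wrap-around; real frames with rotation or meridian-flip geometry need proper star registration):

```python
import numpy as np

def find_shift(ref, img):
    """Estimate the (dy, dx) to np.roll `img` by so it lines up with
    `ref`, via phase correlation: the peak of the inverse FFT of the
    normalised cross-power spectrum marks the translation."""
    f_ref = np.fft.fft2(ref)
    f_img = np.fft.fft2(img)
    cross = f_ref * np.conj(f_img)
    cross /= np.abs(cross) + 1e-12          # keep phase only
    corr = np.abs(np.fft.ifft2(cross))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # map wrap-around indices back to signed shifts
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx
```

Once the offset is known, the second image can be shifted before loading it as a GIMP layer, so the layers line up star for star.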
  13. Ha, yes, there's certainly a lot going on there, it's definitely a great target to experiment with. After reading some of Roger Clark's (ClarkVision) articles, I guess my colours are all over the place, pink & blue is good, orange is bad! Will need to work on that... Have just ordered a DSLR, so very excited to get started with that in the coming weeks...
  14. Pretty happy with the result now. What do you think? At least there's some colour now & most importantly I can see a definite difference in the final stacked & processed image compared to the original single shot! Thanks again for all of your advice. Cheers, Rob
  15. This is where I'm getting to after more experimenting. I can get a reasonably broad stretch looking at the histogram, but there's still no colour showing in the image! Any idea what I'm doing wrong? Yes, StarTools looks like a good option, will explore that. I see there's also the option in Siril where you can specify what the image is and do photometric colour calibration, but it won't run with this image, I guess again because it's a smartphone, so the data is not as 'linear' as it would be with a DSLR... I'm thinking I should give up on this for now until I get a DSLR, then the raw images should be more representative and it should be easier to follow the various youtube tutorials etc, as I will be starting out with a similar image!!! Cheers, Rob
  16. Ha, I didn't realise it made any difference. But playing around with that, I notice that I've just been pulling down the high point, not touching the mid point... Looks like pulling down the mid point helps keep the detail 🙂 Will play some more with this! I guess this is the problem with making adjustments but not really understanding the concept of what it's doing! Will keep at it... Many thanks, Rob
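For what it's worth, the 'pull the mid point down' adjustment has a standard mathematical form: the midtones transfer function (MTF) used by Siril's autostretch (and PixInsight's screen transfer function). It pins 0 and 1 in place and maps a chosen midtones balance m to 0.5, which is exactly why it lifts the faint background hard while leaving the bright core less stretched. A small sketch (the m values are illustrative):

```python
def mtf(x, m):
    """Midtones transfer function on normalised data: maps 0 -> 0,
    1 -> 1 and the midtones balance m -> 0.5. Small m (e.g. 0.05)
    gives an aggressive stretch of the faint end while compressing
    the highlights, which helps keep a bright core from blowing out."""
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

mtf(0.05, 0.05)  # the faint background at 0.05 is lifted to ~0.5
```

Playing with m on a copy of the data is a more controlled version of dragging the histogram mid point by eye.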
  17. Hi again, I managed to retrospectively generate some flats & re-stacked everything in Siril, definitely helps :-). I know M42 is tricky because of the high intensity in the centre & I've seen some reference to people blending shorter exposures etc... But on the edit you made, I notice you managed to stretch the image & bring out the colours without blowing out the centre! I'm really struggling with that, as soon as I get a decent width to the colour stretch, I lose more detail around the centre (in particular there are 3 stars you've managed to maintain to the right of the centre). Can I ask how you managed to stretch to that extent without losing those? Many thanks, Rob
  18. Thanks again for the advice. Will work on a better method for the flats & experiment some more in GIMP now that I know the data in the master is OK! Will also explore Darktable & Siril, thanks for those suggestions. Apparently 'prime focus' isn't possible with a smartphone, maybe because it has its own fixed-focus lens. A DSLR is definitely my next priority. Cheers, Rob
  19. Hi Alacant, Fantastic, thanks so much for taking time to look at the data. So it's 'in there', just my lack of processing experience huh! Can you give me a quick summary of what you did? I see you've stretched the colours, but how did you handle the background subtraction? I guess I need to watch more tutorials. If anyone has any good suggestions please let me know. Seems like there are different approaches to processing even within the same software. Thanks again, Rob
  20. Hi All, Many thanks for your comments. I've attached the TIFF; would really appreciate a quick opinion on whether it's the data itself or just my inability to process it! I've also attached the JPG phone photo for reference. Ha, yes, definitely used the RAW files for DSS; I did initially load all the JPGs as I didn't realise the phone stores both, with the RAW files being in a separate folder. Regarding the 'dark / bias frames', yes, could be, I did only put the dust cap on the telescope so it's possible light was getting in around the eyepiece. If it's the data, then I will put this exercise on hold until I can get myself a DSLR. Was really just trying to get some practice with the smartphone... One thing I have realised is why M42 is suggested as an easy target. It's 'like the moon' compared with other nebulae! I took a similar 30s shot of the Horsehead (IC434) over the weekend & wondered if I'd left the dust cap on! Thanks again, Rob [attachment: master2.TIF]
  21. Hi, I'm only using my smartphone through the eyepiece at the moment. It's a Huawei P20 Pro, so reasonable as far as smartphones go, but for sure it's not equivalent to the DSLR approach... I took about 15 images + a similar quantity of darks, flats & bias shots. As I mentioned, the flats were suspect, but the darks (same ISO / exposure with dust cover on) & the bias (same ISO, fastest speed, dust cover on) frames were good (I think)... Thanks, Rob
  22. Hi All, I'm new to the forum, although thanks retrospectively for all the useful advice in various topics! I recently upgraded my 6" dob to an 8" Skywatcher 200PDS with EQ5 mount in order to try and get into astrophotography (but still have a reasonable set-up for visual). Have been really happy with the new kit, on the rare occasions we've had clear skies this past month! Visually it's a huge step up from the 6" & I've been really pleased with the photos I've taken, compared to the ~2sec exposures I could manage with the dob (OK, nothing compared with all those professionals out there, but as a beginner my expectations were very low!!!). I should mention here that I'm currently using a smartphone (P20) through a 24mm 68deg ES eyepiece & using a coma corrector + ES UHC filter. A DSLR is next on the list when my bank account has recovered... I've been 'focussing' on M42 as an easy winter target, 30s at ISO800, & have finally (after procuring a Bahtinov mask) got pretty reasonable images. So I decided to try out stacking (DSS) & processing (GIMP). I took about 15 shots + flats, darks & bias frames. Admittedly my flats were poor; I used the white T-shirt approach, illuminated with my power supply torch, but it's got a severe radial gradient & looks kind of green! Anyway, I stacked the images, initially with the flats, but later without, & uploaded the TIFF into GIMP. I've watched several tutorials, but so far I'm unable to get anything near as good an image as the original single frame from the camera! :-( Stretching the image brings up all the background noise & I just can't seem to get rid of it. I've tried subtracting a colour selection & also tried the despeckle approach (both from following YouTube tutorials), but neither was successful in removing the background noise & both end up taking a lot of detail away from the image itself. Sorry, getting to the point, I wonder if there are any experts on GIMP that can highlight a different approach?
I'm unsure if the problem is with my RAW data, or with my distinct lack of experience with the processing software! It just seems strange that the original phone image itself is 'reasonable', yet I can only make it significantly worse with additional processing! Sorry for the long post! Any suggestion or advice is greatly appreciated. Many thanks, Rob