Everything posted by BrendanC

  1. I guess so, I was just surprised that one particular dust cloud seemed so definitely to emanate from M51.
  2. Yep, checked out that image too. I have no idea what it could be. I guess I could go through the subs, say, every half hour or so, and if it's persistent then it isn't clouds. I don't know what the 'background noise' could be. It's a definite structure.
  3. Bortle 4 skies, not very much light pollution. It was shot broadband, which is why I was surprised something would appear in what seems to be O3. I can't really give it another shot elsewhere because my rig is static.
  4. Hi all, I recently imaged M51. On using the range selector tool in PixInsight, I could see that there was a large structure fanning out from the object, mostly in the green channel. I thought it was a problem with calibration or light pollution, but the more I looked and played around with the data, the more I became convinced it was an actual thing. So, I stretched it out a bit more, and here it is - the green cloud going to the top right corner. My question is: what is this? I haven't seen it in any other images. If I've discovered it I'm going to call it Cooper's Cone or something. Any takers? Thanks Brendan
  5. Looks like separate items it is, then. I agree, unless someone wants this exact setup it might take a while to sell, and I don't want too much delay. Thanks!
  6. Hi all, I'm moving back to London soon, and won't have much sky to enjoy. Also, after four and a bit years of enjoying astro, I find the balance between enjoyment and frustration has tipped the wrong way, especially with our diabolical weather. So, I'm thinking of selling my stuff, but I'm not sure what the best thing would be to do. I can see that it makes sense to break the system down and sell it in bits. It's nothing special, just a 130PDS scope with focuser and guidescope, NEQ6 mount, ASI533MC camera with some filters, Nevada power supply. However, I can also see that as it's all working nicely, it's essentially an 'observatory in a box', so someone could just take it in toto, download drivers to their computer, plug it in and off they go. If they use APT, I could even supply them with the settings. The 'break it down' approach would mean a bit of faffing to dismantle the focuser, put the knobs back on the scope, advertise etc, but would probably be the quickest route. The 'observatory in a box' approach would give me a nice warm feeling inside because I'd be helping someone get set up quickly, but I can see that if someone is just starting out, and wants to get into this game, adopting a complete system in one go rather than starting simple and building up might be a bit of a challenge. Also, I haven't really seen anyone else do this, which implies to me it's not 'done'. So, what would you do? Any ideas/thoughts/opinions? Thanks Brendan
  7. Hi all, While the weather in the UK continues to stink, I've decided to hone my PI processing skills (or lack thereof). I never really got the hang of adding Ha from an OSC dualband image, for nebulosity, to an OSC RGB image, especially with galaxies, and especially with M101. What I tend to find is: When extracting the stars from the Ha image (using StarXTerminator now, but even back when I used StarNet), it pulls out some of the nebulosity too, because it recognises parts of it as stars, and I can't figure out how to get it back, or stop this from happening in the first place. When combining the Ha with the RGB, I get a red cast from the Ha across the entire image. So I'm looking for ways to extract just the stars from the Ha image, while preserving all of the nebulosity (or, how to put it back in); and how best to combine the Ha starless with the RGB starless (and then I apply the RGB stars via PixelMath, which isn't a problem). I've tried several methods: extracting the channels and recombining the Ha as red; an HaRGB script I came across on the Visible Dark YT channel; and various masking techniques - again, including one from Visible Dark where you use the Ha image as a mask - but nothing quite seems right. I seem to get too much Ha, so it's all a wash of red or purple and loses that lovely blue hue; or not enough, where it's almost as if there's no pronounced Ha nebulosity in the image at all. Does anyone know of a good workflow that will help with this? Or, ideally, a nice video that walks me through it? Thanks Brendan
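
For illustration, here is a minimal numpy sketch of the kind of weighted blend the post above is asking about: blending a starless Ha layer into the red channel of a starless RGB image, with a weight to keep the red cast under control. The file names, the assumed array shapes and the weight value are placeholders, not anything from the original post; in PixInsight the equivalent would normally be a PixelMath expression along the lines of R*(1-w) + Ha*w applied to the red channel.

```python
# Illustrative sketch only: blend a starless Ha layer into the red channel of a
# starless RGB image, with a weight controlling how strong the Ha contribution is.
# File names, shapes and the weight are placeholders, not values from the post.
import numpy as np
from astropy.io import fits

rgb = fits.getdata("m101_rgb_starless.fits").astype(np.float64)  # assumed shape (3, H, W)
ha  = fits.getdata("m101_ha_starless.fits").astype(np.float64)   # assumed shape (H, W)

rgb /= rgb.max()   # normalise to 0..1 so the blend weight behaves predictably
ha  /= ha.max()

w = 0.3            # start low; a high weight is what produces the all-over red wash
r, g, b = rgb[0], rgb[1], rgb[2]
r_new = r * (1.0 - w) + ha * w           # weighted blend into the red channel only

out = np.stack([r_new, g, b])
fits.writeto("m101_hargb_starless.fits", out.astype(np.float32), overwrite=True)
```

Keeping the weight low is the usual way to preserve the blue galaxy colour while still lifting the Ha regions.
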
  8. Interesting. Thank you both again for your help. I'll try the flats test and I'll give the 1.25 inch filter another go but at the end of the day, since I had no problems with my 2 inch filter, if that just works then I'll go back to that, keep calm, and carry on.
  9. I've just been going through my old flats and they all have it too. I recently installed the Backyard Universe secondary spider, and both before and after that change I can see ghost views of the central obstruction. I guess that whatever is being captured in a flat, it should be calibrating out the same thing in the light. Which leads me back to your test for things shifting when taking flats.
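
The reasoning in the post above can be made explicit with the standard calibration arithmetic: a shadow or reflection only divides out cleanly if it appears identically in both the flat and the light. A rough numpy sketch, with placeholder file names:

```python
# Rough sketch of the standard calibration arithmetic (placeholder file names):
# calibrated = (light - dark) / normalised(flat - flat_dark)
import numpy as np
from astropy.io import fits

light     = fits.getdata("light.fits").astype(np.float64)
dark      = fits.getdata("master_dark.fits").astype(np.float64)
flat      = fits.getdata("master_flat.fits").astype(np.float64)
flat_dark = fits.getdata("master_flat_dark.fits").astype(np.float64)

flat_cal = flat - flat_dark
flat_cal /= np.median(flat_cal)       # normalise so the division preserves signal level

calibrated = (light - dark) / flat_cal
# A dust shadow or reflection cancels in this division only if it is present, with
# the same position and strength, in both the flat and the light. If something
# shifts between the two, the division leaves a bright or dark ghost of it behind.
fits.writeto("light_calibrated.fits", calibrated.astype(np.float32), overwrite=True)
```
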
  10. Sure, here you go - really stretched master flat (actually taken from the GHS preview cos I think it displays in fewer bits and shows off the artefacts better). I can see what you're saying (literally) - the two circles do look somewhat like what you'd expect to see when looking down the OTA, complete with a bit of secondary spider for good measure, and therefore could be reflections.
  11. Thank you for the input Olly. I've tried reading this several times and I still don't quite understand it: "Could the bright flats-equivalent of the dark patches on the calibrated image be reflections created by some part of the light path illuminated by the panel?" I'm wondering whether it's reflections too. This is from a 130PDS Newtonian.
  12. Absolute superstar - thank you so much for taking the time to do this, as ever I really appreciate it. As you say, the difference in focus positions shouldn't be drastic, as focus changes throughout the night. I refocus every hour. It's a strange one. Here's a super stretch of another cluster I took the same night. I think I can just about make out similar artefacts in this one, but they're very indistinct. So clearly something happened between that one, and this one. I'm wondering if it's a combination of factors: The M67 data was collected at the very end of astro night (and in fact going about 15 minutes beyond), and at that time, it was pointing at a house nearby. I think they might have had a light on all night, without curtains, so the scope was pointing almost directly at that light. This could have affected things in some way, with lots of light bouncing around. There were certainly strong gradients in the subs, as you probably noticed. I recently removed the flocking from my scope because it was coming away. So, if there was lots of light bouncing around in the OTA, the lack of flocking might have exacerbated it. There may have been dew. It wasn't forecast, but when I brought the scope in later that day, the primary was dewy, and I had to discard almost half of the subs but I assumed that was because of cloud, not dew. So, the subs may not have matched the flats because of this. So, I wonder whether these, combined, have just created a unique set of circumstances. Thing is, wouldn't you know it, they happened on the same night I was testing a new filter. Anyway I will definitely do your test. When I do the pixel math, will it really just be a simple expression of zenith flat / horizontal flat? And it should yield a flat image with no details at all? Thanks again, you've already been very helpful.
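
For what it's worth, assuming the suggested test really is just dividing one master flat by the other, a numpy equivalent of that PixelMath expression would look like the sketch below (file names are placeholders). If nothing in the light path moved between the two orientations, the ratio should show no structure beyond noise.

```python
# Sketch of the "divide one flat by the other" test (placeholder file names).
# If nothing shifted between orientations, the normalised ratio should be
# featureless apart from noise; surviving rings or shadows point at movement.
import numpy as np
from astropy.io import fits

flat_zenith     = fits.getdata("master_flat_zenith.fits").astype(np.float64)
flat_horizontal = fits.getdata("master_flat_horizontal.fits").astype(np.float64)

ratio = (flat_zenith / np.median(flat_zenith)) / (flat_horizontal / np.median(flat_horizontal))

print("ratio min/max:", ratio.min(), ratio.max())
fits.writeto("flat_ratio.fits", ratio.astype(np.float32), overwrite=True)  # stretch this hard and inspect
```
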
  13. Interesting test, I'll give that a go when I get the chance.
  14. It's a one-shot camera so just the one set of flats, taken before the shoot, using the Lacerta box.
  15. Hi all, So, another day, another problem. I've been trying out a 1.25 inch UV/IR filter, to see whether I can use smaller filters with my ASI533MC Pro and 130PDS with 0.9x coma corrector. Initially I had it attached incorrectly, but now it is most definitely attached OK, using an adapter, right up against the sensor. It should be fine, but I'm experiencing calibration errors with my flats. This is what I'm getting - pronounced dust motes top left and right: I'm using flats, darks, and flat darks, all taken with the same setup, all at the same temperature, gain and offset. I don't use bias frames, because they make no difference, and I've tested bias with this data and I get the same result. The flats were taken with a Lacerta flat field light, using APT's flats wizard to get to 30K. I've not had a problem like this with my 2 inch filter, and I could just go back to it, but that's kind of academic: there's clearly an issue here, and I want to understand it before I progress. It's probably been in other images too, but not visible until I shoot a largely dark star field. Going down to 1.25 inches really shouldn't be a problem with this sensor. The most likely candidate is that something is moving, but I don't know what. My secondary mirror and focuser seem solid. The primary is held by clips that don't actually touch the mirror, but tilt is fine as is collimation, so I don't suspect mirror slop either. Or is this over-correction? If so, how does that happen, and what's the fix? Or is it under-correction? Etc. My head is spinning, frankly. After nearly four years of doing this - two with a DSLR and perfect calibration every time, a few months with an ASI1600 that I really didn't get along with, and now with this camera - I really should be moving along nicely but keep coming up against weird, random issues. Rather than make suggestions - because, however helpful, they do tend to stack up and leave me no closer to solving the problem - would anyone be willing to please actually look at my data and have a stab at what could be going wrong? If so, it's here (1.23GB download, includes master dark, flat dark and optional bias)... Google Drive: https://drive.google.com/drive/folders/1jaZB2V8LRoRhs5Roz8zLrGhYD9pZTXrb?usp=drive_link OneDrive: https://1drv.ms/f/s!AqovBuVZMwj3mqlsJBNWnsGxgkBghA?e=zqcXl8 PLEASE only make suggestions if you're pretty confident about the fix. I've had a ton of problems recently and I'm starting to lose my mojo. Thanks Brendan
  16. Well so it does! Never spotted that. And I just took another look at mine, and mine says so too. In which case, why the messages about no training configuration found, or the model not being compiled, and why is the resulting output different when driven by the script? Curiouser and curiouser.
  17. I'm correct? Wow. First time for everything. Thanks, I think I'll just use GraXpert manually for the time being. I've dipped my toe in Discord in the past and it's just noise. Thanks anyway.
  18. Hi all, My latest obsession - trying to figure out exactly what's going on when I use the new method of running GraXpert from within PixInsight. Quick recap: GraXpert is a very nice gradient removal package that has, until recently, been a standalone product, not running within PI. So, to use it, you had to save your image, exit PI, open GraXpert, process it, save it, go back into PI, open it. Recently, Jürgen Terpe developed quite a neat way of running it within PI, as part of his Toolbox script set. It's not a native PI module or anything, just a script that calls the GraXpert executable and passes information to it and back into PI (I think). If you've used it, you'll know that it does seem to work - load up your image, run the script, it hops off over to GraXpert, comes back with a gradient-free image. All good. I thought so too, but on looking more closely, I can't quite figure out exactly what it's doing, and this bothers me a little. So, this is what I get when I do a very rough import of my Ghost Nebula linear image into GraXpert, by manually opening GraXpert, loading the linear image, processing it using the RBF method, and saving the stretched version (it seems to show what's going on better than stretching within PI): Nice. This is what I get when I do the same thing, but using the newer AI method. As you would expect, it's slightly different (it's not immediately apparent - save them and flick between them using Photoshop or some such thing and you'll see the slight differences). Not necessarily better, in this case, although I've tried it on emission nebulae and it does seem to preserve more of the nebulosity: Nice. Again. So, the new third option: running GraXpert from within PI. All was well and good until I noticed this in the process console: 2023-12-13 17:11:25,531 MainProcess absl INFO Fingerprint not found. 2023-12-13 17:11:25,713 MainProcess tensorflow WARNING No training configuration found in save file, so the model was *not* compiled. statechange: 0 finished:0 / 0 Compile it manually.Saved model loading will continue.1.0.1. you can change this by providing the argument '-ai_version' "Bit odd, that," I thought. I looked around for other people who have reviewed this, and noticed that Cuiv the Lazy Geek shows his process console while demoing it, and it appears on his console too (see https://youtu.be/KBR2xsZ-NmM, but blink and you'll miss it - screenshot below with relevant part of process console highlighted): It looks to me like it's failing to find an AI model. So, is it defaulting to RBF? Or actually using AI? Let's take a look: in this version, I've used the new script in PI to call GraXpert and remove the gradient, but then saved it as a linear image in PI, then opened it in GraXpert, but not processed it, just stretched and saved, which means I get the same stretch as the other two images: It doesn't seem to match either the RBF or the AI versions. As with the RBF vs AI versions, the difference is subtle, and you have to flick between them to see this. You could argue it's so subtle as not to matter, but I do prefer to know what's going on with these things. I'd hate to find out, a year or so down the line, that I've been processing my images not to the best of my ability, simply because I didn't ask. So, I'm not sure what it's doing, exactly. Is it running RBF, or AI, or something else? Does it default to the latest model used when run outside of PI?
If so, why is the image generated by using the script to call GraXpert different from both of the images created using GraXpert directly? Can I control which version it's running? I'd like to force it to use the AI model via the '-ai_version' argument, but I don't know where to specify that. Or, should I just get a life? Any takers? Thanks Brendan
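
One way to settle which backend the script is actually running, rather than blinking images by eye, is to difference the saved outputs numerically. A small numpy sketch, with placeholder file names standing in for the AI, RBF and script-produced results:

```python
# Compare two GraXpert outputs numerically instead of blinking them by eye.
# File names are placeholders for the saved AI / RBF / script-driven results.
import numpy as np
from astropy.io import fits

a = fits.getdata("ghost_graxpert_ai.fits").astype(np.float64)
b = fits.getdata("ghost_graxpert_script.fits").astype(np.float64)

diff = a - b
print("max abs difference :", np.abs(diff).max())
print("mean abs difference:", np.abs(diff).mean())

# Save the difference so it can be stretched in PixInsight; if the script were
# really running the same model with the same settings, this should be close
# to zero everywhere.
fits.writeto("graxpert_ai_minus_script.fits", diff.astype(np.float32), overwrite=True)
```
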
  19. Yes, I'd have thought the flats would still work even if the filter was too far away. Fingers crossed it all just works next time.
  20. OK, got help on this one - turns out I just assumed I could use the 2-to-1.25" adapter and then another adapter I had lying around, to screw the filter into the end of the coma corrector, exactly the same way I use the 2" filter. Wrong. I should have unscrewed the extension tube, screwed the filter in there, in the 2-1.25" adapter, right up close to the sensor, then replaced the extension tubes. Silly me. As they say 'assume makes an ass of u and me'. So, hopefully this will fix the problem next time.
  21. Hi all, Next problem... I recently looked at the possibility of using a 1.25" filter with my ASI533MC and 130PDS, reasoning being that if it worked (which it should, given such a small sensor), I could upgrade to 7nm duoband for less cash. So, I got me a cheap little Svbony UV/IR 1.25" filter, and tested it last night, just an hour before the clouds came in, on M2. At first the results looked promising, but then I noticed a faint circular outline on the stack, sort of to the right and down from central. On super-stretching it, there it is, but everything else looks central to me. Wondering if it was a flats issue, I stacked without flats, and got this - looks like the flats are working and again, everything looks sort of reasonably centered. Then I super-stretched the master flat, and got this. It looks to me like my camera isn't quite centrally aligned over the tube, so I can see the edge of the filter and that's what's causing this issue. I can't quite believe this because I've collimated the 130PDS a bazillion times. I don't know how much more accurate I could be. It looks absolutely fine through the Cheshire. So, is this my problem? If the fix is 'go away and collimate it again' then I'm just going to go back to 2 inch filters and be damned. Thanks Brendan
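
As a back-of-envelope check of the "such a small sensor" reasoning in the post above, here is a sketch using approximate numbers I've assumed rather than anything measured from this setup: ASI533 sensor geometry, an ~f/4.5 light cone for the 130PDS with the 0.9x corrector, a guessed filter-to-sensor spacing, and a typical ~26 mm usable aperture for a 1.25" filter.

```python
# Back-of-envelope vignetting check (all numbers approximate / assumed).
# Rule of thumb: required clear aperture ~ sensor diagonal + distance / focal ratio.
import math

pixel_um = 3.76                      # ASI533MC pixel size (approx)
side_px  = 3008                      # square sensor
side_mm  = side_px * pixel_um / 1000.0
diagonal = side_mm * math.sqrt(2)    # roughly 16 mm

f_ratio  = 5.0 * 0.9                 # 130PDS at f/5 with a 0.9x corrector, ~f/4.5
distance = 15.0                      # mm, filter-to-sensor spacing - placeholder, measure your own
clear_ap = 26.0                      # mm, typical usable aperture of a 1.25" filter (approx)

required = diagonal + distance / f_ratio
print(f"sensor diagonal   : {diagonal:.1f} mm")
print(f"required aperture : {required:.1f} mm")
print(f"filter offers     : ~{clear_ap:.0f} mm")
# On these assumptions the margin is comfortable, consistent with the post's
# reasoning that the small sensor itself shouldn't be the limiting factor.
```
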