Everything posted by The Lazy Astronomer

  1. Ah, yes, Wipe will do that if there are stacking artifacts around the edges, or what StarTools refers to as "dark anomalies" in the image (e.g. dead pixels, or small dark dust shadows). From the image you've posted above, it doesn't look like there are any dust shadows, so I would suggest there's probably some stacking artifacts messing with Wipe's algorithm (it really, really hates them!!). Try cropping the edges of the frame first (there's a quick crop sketch at the end of this list). Feel free to upload your stacked FITS file and I'll have a play around with it if you want? 2-minute exposures are probably fine. It's probably longer than you need, but you won't be overexposing.
  2. Did you run it through the Wipe module first? There's no real way to completely escape light pollution during capture (except maybe going out into the middle of nowhere!) but Wipe, or similar background extraction routines in other software, should help to remove/reduce its effects. Another thing to consider is your exposure length. What's the setup you used here, and where on the Bortle scale are you?
  3. Looks like inverse vignetting to me, caused by the flats over-correcting - how did you take the flats?
  4. Lovely detail in the dust lanes. I also note you've got a blueish hue in M110 - I got a similar colour on my recent first attempt at M31, so, nice to see our results agree there at least (some images have it with this blueish tone, others a more yellowish one). One way around it, if you have an autofocuser and have set focus offsets for each filter, is to shoot them in a loop rather than in blocks. If I want to try and ensure I get all of the colour channels on a target, I do a sort of hybrid: I'll shoot 5 subs each of R, G and B, and 15 of luminance, and repeat until I've gathered enough data. A big advantage of modern CMOS cameras is that long sub exposures are not needed, so you can complete one loop quite quickly.
  5. If your mount is the newer version which has the USB port on the mount head (see image), then you don't need any special cables; any regular USB cable will work. Once connected to a PC, the next thing to do is go into Device Manager (assuming you're on Windows) and check which COM port the mount is mapped to (note this will likely change if you plug into a different USB port on the PC, so try to remember which USB port you use, and use the same one each time). Then type eqmod in the search bar (again, I'm assuming Windows 10) and open the EQMOD toolbox. The window on the left will appear. Click driver setup and the window on the right will appear. Under EQMOD port details, set the correct COM port as identified in Device Manager and set the baud rate to 115200. This should now allow the PC to talk to the mount, and you should be able to manually slew using the arrow keys in EQMOD. (There's a quick COM port check sketch at the end of this list.)
  6. 120 is probably your best bet as it's where the high conversion gain kicks in (read noise drops significantly, and dynamic range increases back to the same level as gain 0). 120 also happens to be unity gain on this camera (where 1 electron = 1 ADU). I have seen some people using high gain values with narrowband filters (e.g. 200 - 250) with pleasing results, but I've not seen any direct comparisons of high gain vs 120, so it may not make any appreciable difference. I have the mono version, and personally, I stick to 120 for now. Edit: I've just checked the specs, and unity gain is actually 117, so at 120 it's a little under 1 electron per ADU, but, close enough! 😉 (See the gain arithmetic sketch at the end of this list.)
  7. Ditto what's already been said. Leave the imaging train intact, and you can reuse flats for a little while. Eventually you'll get some new dust motes, or they'll move a bit and you'll need to reshoot the flats, but fortunately that's a job for the daytime/cloudy night.
  8. Thanks alacant. I follow all of the recommendations to get the data ready for StarTools, but I find this tends to happen when the data is pushed further than it really allows, in combination with lazy/poor use of the star reduction and superstructure modules. It's definitely user error - I'm going to reprocess when I get the time to grab some more colour data (and redo the red channel completely, as it has not calibrated well).
  9. Images taken back in July or August sometime; I finally have a processed version I'm happy with (3rd attempt at it). Approx. 4h45m total integration time, in the ratio of 3:1:1:1 Ha:R:G:B. I feel I may have murdered the stars a bit too much with star reduction, and my technique still leaves something to be desired, but I'm happy enough with this version for now. As always, comments and criticisms welcome!
  10. Can always rely on you to spot all the issues! 😉 The background is one of the things I was not very pleased with. I used StarTools and I think it has a tendency to cause this type of background when the data is pushed too far. I probably should, as you say, be a bit more conservative with the processing. There was also indeed an issue with the red flats (and to a certain extent, the lum as well). I need to do a little investigating to find out why they didn't quite work. Unfortunately I've since disturbed the imaging train, but as it's only 40 mins of data, I think I might just throw it all away and redo the red channel entirely.
  11. Not necessarily that pleased with this one (neither the capture nor the processing), but as it's my first Andromeda, I thought I'd post anyway and hopefully will be able to show improvement in subsequent attempts. A bit over 7 hours integration, with the vast majority in L and only around 40 mins each of R, G and B. Hopefully I'll be able to shoot some more frames over the coming months before Orion starts to take my attention away. As always, comments and criticisms welcome.
  12. Hi and welcome to SGL! Fair warning: you are about to go down an endless rabbit hole with astrophotography. Next warning: no one telescope will do planets and deep sky objects well. I'm going on the assumption for now that you're mainly interested in DSOs, so what I say below may not be relevant if you're more interested in the Moon and planets. As you already have a camera, rather than a telescope, I might suggest you get yourself a camera tracking mount, such as the Skywatcher Star Adventurer or the iOptron SkyGuider Pro, and take some photos using camera lenses (you can often pick up half-decent used ones for not very much money). Personally, I would say the setup you mention is not particularly suited to photography - it's more of a visual setup. People often think you need a large telescope to get pictures of objects in space, but a lot of the typical 'beginner objects' are quite large (the Andromeda galaxy, for example, is several times the size of the full moon - see the field of view sketch at the end of this list). Have a search around on the forums and you'll see loads of examples of what can be done with even basic lenses.
  13. I'm going to buck the trend a little here, because I do polar align more accurately, usually to within 10 - 30" total error as reported by SharpCap. I always set up in the same place in my garden, and find it only takes a couple of minutes to get the polar alignment down to that level. I've run the guiding assistant a couple of times and that reports a PA error of about half an arcminute, so pretty good agreement with SharpCap. My guide graph also shows barely any dec corrections throughout imaging runs. The way I see it, anything I can do to make the mount's job easier is for the better. (There's a rough drift arithmetic sketch at the end of this list.)
  14. Oh, how we suffer for our art! (science)
  15. Never an actual injury, but boy, does an EQ mount swing around fast when you accidentally unlock the clutch with no counterweights!! I've since discovered I'm just about able to carry my full setup (eq6r, esprit 100, and associated imaging gubbins) in and out without having to disassemble, but am slightly terrified that one day in the winter I'll slip on some ice while bringing it in at 2am only to be found much later on in the morning by my partner, frozen like Jack Nicholson in The Shining...
  16. @vlaiv can tell you what the 'wrong' colours are! 😁 Honestly though, I believe him. If someone were to write a program which was able to tell me the correct colour balance for a broadband image, to accurately represent a given DSO, I would happily throw fistfuls of dinars at them (hint, hint 😉).
  17. When I first started with DSOs (about a year ago, so basically I still am 'getting started'), I really didn't enjoy processing at all - I found capture much more fun - but I find myself really starting to enjoy it now. I think downloading some of the IKI data helped; there's nothing like some really good quality data to practice on and try new things with in processing! That said though, I still hate colour balancing (in broadband). I never seem to get it right the first 2 or 3 times, so getting an image out there takes me a while. Case in point: I have an Andromeda image from about a month ago I'm still working on because I just can't get the colours right! Edit: thinking about it, I also have a Pelican Nebula from about July (I think) which I've sort of given up on because it just keeps coming out too purple.
  18. I quite like it, it's very striking and dramatic. And, let's be honest, with narrowband imaging, you can get away with being a bit more 'artistic' when it comes to colours. 😁
  19. I quite like the spikes, has a subtle hubble-ness to it 😁
  20. Wow, that bow shock is really showing up nicely. Coincidentally, I'm also shooting this target at the moment (about 8hrs so far) and with quite similar equipment (same camera, slightly shorter focal length, and 6nm filters). If I can get something even 1/10th as good as this, I'll be well pleased! Question though: I note you've used gain 200 and offset 10 for the capture - do you find you get better results in NB with these settings over the default 120 gain, offset 30?
  21. Depends how many dust motes you're contending with - if you've got none, then you could probably use the previous flats with no problem. If you've got quite a few, then potentially you'll be introducing more artifacts, unless you were very lucky and managed to line the camera up exactly as it was before you removed it. If you find the flats are making things worse, you might be able to model a synthetic flat in post-processing (try a Google/YouTube search for some how-tos with whatever software you use).
  22. Afraid not. To reuse flats, the entire imaging train must be undisturbed between nights.
  23. Unless you have a particularly dusty environment, or break down the imaging train after each session, you should be able to reuse flats multiple times. I hate the faff of taking flats (and associated flat darks), so I usually like to get a couple of months' use out of them. Your original question has already been answered, but just to add to it: astro stacking software analyses the images to find patterns in the stars and uses that information to align them to a given reference frame. Individual lights can be in literally any orientation, and as long as the software can find enough matching star patterns, it will align them all to the reference as part of the stacking process (there's a small registration sketch at the end of this list). It's weird how it's something we don't really think about, but it is extremely clever!
  24. Thermal noise is so insignificant compared to read noise and (most significantly) shot noise that it will make virtually no difference to the overall noise level of the image (see the noise budget sketch at the end of this list). What's most important is to ensure the same temperature is used for lights and darks, to enable proper dark calibration. The advantage of a cooled camera is - presuming you always use the same temperature - you can reuse your darks ad infinitum. If you haven't already, I highly recommend watching this video of a talk by Dr Robin Glover:
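
A few rough sketches to go with the posts above (all Python, all illustrative rather than definitive). First, the pre-Wipe crop suggested in post 1 - a minimal sketch assuming a 2D mono stack; the file names and the 50-pixel margin are placeholders.

```python
# Minimal sketch: trim stacking artifacts off the edges of a stacked FITS
# before running background extraction (e.g. StarTools' Wipe).
# Assumes a 2D (mono) stack; file names and margin are placeholders.
from astropy.io import fits

MARGIN = 50  # pixels to trim from each edge; adjust to cover the artifacts

with fits.open("stack.fits") as hdul:
    data = hdul[0].data
    header = hdul[0].header

cropped = data[MARGIN:-MARGIN, MARGIN:-MARGIN]
# Note: the original header is copied across unchanged (any WCS will be slightly off).
fits.writeto("stack_cropped.fits", cropped, header, overwrite=True)
```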
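For the COM port setup in post 5, a small sketch using pyserial to list the ports Windows has assigned and to confirm the mount's port opens at 115200 baud. "COM3" is an assumption - use whatever Device Manager reports; EQMOD does the actual mount communication.

```python
# Minimal sketch: list available COM ports, then open the mount's port at the
# baud rate EQMOD expects (115200). "COM3" is an assumption.
import serial
from serial.tools import list_ports

for port in list_ports.comports():
    print(port.device, "-", port.description)

# Quick check that the port opens at 115200 baud.
with serial.Serial("COM3", baudrate=115200, timeout=1) as conn:
    print("Opened", conn.name)
```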
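For the unity-gain arithmetic in post 6, a sketch assuming ZWO's convention of gain settings in 0.1 dB steps (so e-/ADU halves for every +60 of gain); the unity gain of 117 comes from the post, and the exact figure is camera-specific.

```python
# Rough arithmetic sketch, assuming gain settings are in 0.1 dB steps,
# so e-/ADU halves for every +60 of gain relative to unity.
def e_per_adu(gain, unity_gain=117):
    """Approximate electrons per ADU at a given gain setting."""
    return 10 ** (-(gain - unity_gain) / 200)

print(e_per_adu(120))  # ~0.97 e-/ADU, i.e. close enough to unity
print(e_per_adu(200))  # ~0.38 e-/ADU at a higher, narrowband-style gain
```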
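To put numbers on the "big targets don't need big telescopes" point in post 12, a quick field-of-view calculation. The APS-C sensor size is an assumption; swap in your own camera and lens.

```python
# Field of view of a camera lens, to show why large, bright targets like M31
# frame nicely with ordinary lenses. Sensor size assumes a typical APS-C chip.
import math

def fov_degrees(sensor_mm, focal_length_mm):
    return math.degrees(2 * math.atan(sensor_mm / (2 * focal_length_mm)))

w, h = 23.5, 15.6           # APS-C sensor dimensions in mm (assumption)
for fl in (50, 135, 200):   # common, inexpensive prime lens focal lengths
    print(f"{fl} mm lens: {fov_degrees(w, fl):.1f} x {fov_degrees(h, fl):.1f} degrees")

# M31 spans roughly 3 degrees and the full Moon about 0.5 degrees, so even a
# 135 mm lens frames Andromeda with room to spare.
```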
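Rough arithmetic behind the "make the mount's job easier" point in post 13: the worst-case declination drift from a small polar alignment error, using the small-angle approximation.

```python
# Worst-case Dec drift ~= PA error x sidereal rate x exposure time (small angles).
import math

SIDEREAL_RATE = 2 * math.pi / 86164.1  # rad/s

def max_dec_drift(pa_error_arcsec, exposure_s):
    """Worst-case declination drift (arcsec) over one exposure."""
    return pa_error_arcsec * SIDEREAL_RATE * exposure_s

print(max_dec_drift(30, 300))   # 30" PA error, 300 s sub -> ~0.7" of drift
print(max_dec_drift(300, 300))  # 5' PA error, same sub   -> ~6.6" of drift
```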
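A minimal sketch of the star-pattern registration described in post 23, using the astroalign package as a stand-in for what stacking software does internally; the file names are placeholders.

```python
# Star-pattern registration: astroalign matches triangles of detected stars to
# find the transform mapping one sub onto a reference, regardless of orientation.
import math
import astroalign as aa
from astropy.io import fits

reference = fits.getdata("reference_light.fits").astype(float)
source = fits.getdata("rotated_light.fits").astype(float)

# Align the source sub onto the reference frame.
registered, footprint = aa.register(source, reference)

# Inspect the fitted transform (rotation/translation/scale).
transform, (matched_src, matched_ref) = aa.find_transform(source, reference)
print("Rotation (deg):", math.degrees(transform.rotation))
```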
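A back-of-the-envelope noise budget for the point about thermal noise in post 24; the sky rate, dark current, and read noise figures are illustrative only - swap in your own camera's numbers.

```python
# Per-sub noise budget: shot, thermal (dark current) and read noise add in quadrature.
import math

def sub_noise(sky_e_per_s, dark_e_per_s, read_noise_e, exposure_s):
    shot = math.sqrt(sky_e_per_s * exposure_s)   # sky/target shot noise
    dark = math.sqrt(dark_e_per_s * exposure_s)  # thermal noise
    total = math.sqrt(shot**2 + dark**2 + read_noise_e**2)
    return shot, dark, read_noise_e, total

# e.g. 2 e-/s/px sky, 0.003 e-/s/px dark current (cooled), 1.5 e- read noise, 120 s sub:
print(sub_noise(2.0, 0.003, 1.5, 120))
# shot ~15.5 e-, dark ~0.6 e-, read 1.5 e-, total ~15.6 e- : the thermal term is negligible.
```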