Everything posted by The Lazy Astronomer

  1. TL;DR - the L-eXtreme is designed for emission nebulae and would be of little use for galaxies, the exception being to add some Ha data to a broadband capture.

The longer answer: the L-eXtreme is a dual narrowband filter with 7nm bandpasses around the Ha and Oiii emission lines (Oiii actually has a couple, but I believe the L-eXtreme focuses on the 500.7nm emission). The Ha and Oiii emission lines are caused by ionised gases which emit light at specific wavelengths. The L-eXtreme blocks out ~95% of visible light to isolate just a small section around those 2 emission lines, thus significantly increasing the contrast when photographing these regions.

Galaxies, on the other hand, emit light across the whole of the visible spectrum, and so are best photographed without the L-eXtreme. That said, a good number of galaxies do have some quite strong Ha regions, so it can be of benefit to add some narrowband data into a broadband capture to accentuate these areas (most would do this by adding the Ha data to the red channel when processing - see the sketch below).

To answer your other question, you can choose either to create a full false-colour narrowband image, or to blend the narrowband data into a true-colour broadband image. When doing this, the images would be stacked separately and the narrowband blended in during processing. As above, Ha would be added to the red channel, and Oiii to green and blue.
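One way that red-channel blend might look (a minimal numpy sketch, assuming registered, linear stacks normalised to [0, 1] - a simple weighted average is just one common approach, max(R, Ha) is another):

```python
import numpy as np

def blend_ha_into_red(rgb: np.ndarray, ha: np.ndarray, weight: float = 0.4) -> np.ndarray:
    """Blend an Ha stack into the red channel of a broadband stack.

    rgb: (H, W, 3) broadband image; ha: (H, W) narrowband image.
    Both assumed registered, linear and normalised to [0, 1].
    """
    out = rgb.copy()
    out[..., 0] = (1.0 - weight) * rgb[..., 0] + weight * ha  # red channel only
    return np.clip(out, 0.0, 1.0)
```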
  2. I only use Siril for stacking, and I've never actually tried to process an image with it, so can't advise there, but have a look at the tutorials on their website to get you started. This one runs through a full processing workflow, including background subtraction/neutralisation and colour calibration: https://siril.org/tutorials/tuto-scripts/#lets-start-processing-the-result-a-nametuto-3a
  3. I wouldn't say they leave GIMP in the dust - excellent results can be achieved by someone experienced in the use of their chosen image editing software. It is difficult when you're first starting out, because it's very likely you're not getting the best out of your image - practice makes perfect!

I would say, though, that software like Siril or PixInsight (amongst many others) is specifically designed for astro image editing, so I think it gives an easier experience than more generic photo editing software like GIMP or Photoshop*. However, there are some people on this forum who get fantastic results with Photoshop, so really it goes back to what I said in the first paragraph.

*There are a lot of people who would find it hilarious to see the phrases "easier experience" and "PixInsight" in the same sentence, but I found it nowhere near as difficult to get started with as its reputation seemed to suggest. It's not easy, but it's not completely unintuitive (to me at least, anyway).

Yes, I use NINA exclusively for complete control of my whole setup during capture. To be honest, NINA was the first complete control software I tried and it's worked so perfectly I've never felt the need to look into any of the other options.

Plate solving is very straightforward, and much easier than using the mount's hand controller and doing star alignment. A small amount of initial set up in NINA and you're good to go. Once the mount is polar aligned, NINA needs to know where you want to go, and this can be done by using the object database in NINA, importing from a planetarium software like Stellarium, manually entering celestial coordinates, or uploading a previous image file (so many options!). NINA will then slew the mount, take a short exposure, plate solve the current location, and make any positional adjustments as needed. It will keep repeating these steps automatically until the pointing is within your specified level of accuracy (outlined in the sketch below). Magic!

For me, it usually takes 2 or 3 rounds of plate solving to get my target spot on - the first one gets it close, the follow-ups make fine adjustments to get it perfect. Takes maybe 2 or 3 minutes total normally. Combined with SharpCap's polar alignment tool, I'm usually imaging within about 10 - 15 minutes.
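In outline, the loop NINA automates looks something like this (pure illustration - the callables are hypothetical stand-ins, not NINA's actual API):

```python
# Hypothetical sketch of an automated plate-solve centering loop.
# slew, expose, solve and separation stand in for real mount/camera/solver
# calls; NINA's internals will differ.

def center_on_target(target, slew, expose, solve, separation,
                     tolerance_arcsec=30.0, max_attempts=5):
    slew(target)                           # initial GoTo from the mount model
    for _ in range(max_attempts):
        image = expose(seconds=5)          # short throwaway exposure
        actual = solve(image)              # where the scope is really pointing
        if separation(actual, target) <= tolerance_arcsec:
            return True                    # pointing within requested accuracy
        slew(target, sync_from=actual)     # sync on the solved position, re-slew
    return False                           # still off after max_attempts rounds
```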
  4. You probably can. It sounds like you've got a fairly high amount of light pollution, so you've got a bit of leeway, I reckon, so long as you've got plenty of storage space!

I wouldn't say that - nothing wrong with a 500 - 600mm focal length triplet! Assuming a pixel size of around 5um, that gives a sampling rate of around 1.8"/px, which is pretty good I'd say (worked through below).

A simple background extraction might just fix it - consider looking at Siril (free), or, if you're feeling fancy, PixInsight.
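For reference, the arithmetic behind that sampling figure (575mm is an assumed, illustrative focal length for a 500 - 600mm triplet):

```python
# Image scale ("/px) = 206.265 * pixel size (um) / focal length (mm)
pixel_size_um = 5.0
focal_length_mm = 575.0  # assumed mid-range value, purely for illustration

scale = 206.265 * pixel_size_um / focal_length_mm
print(f"{scale:.2f} arcsec/px")  # -> 1.79 arcsec/px
```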
  5. Yeah, I saw the offset of 2 in the FITS header when I was checking the gain you used and thought it was a bit low, so checked the stats. I think what you say is right, but I'm now confused how you can have a minimum stack with a minimum pixel value greater than the minimum value of a single sub 🤔 A value of 1.000 on the 14-bit scale should have read 0.00006 on the normalised scale.

No worries, obviously we're all good people on this forum, but you never know who's lurking and I didn't want you to be giving away info that led to some weirdo turning up at your house! 🤣
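For reference, the conversion behind that 0.00006 figure:

```python
# Mapping a 14-bit ADU value onto the normalised [0, 1] scale
adu_value = 1.0
max_adu = 2**14 - 1                   # 16383 for a 14-bit sensor
print(f"{adu_value / max_adu:.5f}")   # -> 0.00006
```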
  6. Yes! Remembered something correctly!! 😁 Maybe "diminishing returns" wasn't the right phrase, but I think in my head I was thinking along the lines of "if I have, say, 4 hours on a target, then shooting an extra couple of hours would help a little bit, but not significantly increase SNR".
  7. Optimum minimum sub exposure times using a modern, low read noise camera can be as low as around 5s or so in typical suburban sky conditions. Many people (myself included) tend to stick to exposures somewhere in the range of 30 - 120s to avoid building up thousands of frames to have to store and stack.

Generally speaking, the longer the total integration time, the better the image in terms of signal to noise ratio (SNR), but it's diminishing returns - SNR grows with the square root of total integration time, so it takes 4x the integration time to get a 2-fold increase in SNR.
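A quick illustration of that square-root scaling (assuming subs of equal quality throughout):

```python
import math

# SNR of a stack grows with sqrt(total integration time), hence the
# diminishing returns: each doubling of SNR costs 4x the time.
base_hours = 4.0
for factor in (1, 2, 4, 9):
    print(f"{base_hours * factor:>4.0f} h -> relative SNR x{math.sqrt(factor):.1f}")
# ->  4 h x1.0, 8 h x1.4, 16 h x2.0, 36 h x3.0
```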
  8. Hi and welcome to the forum. To help us give better advice, could you share a bit more info about what you're planning/want to do? I.e. do you plan on doing visual only, or would you like to attempt some photography as well? Do you have a particular area of interest, such as lunar, planetary, double stars, deep sky objects? What's your observing site like - would it be at your home or would you have to travel somewhere? There are more questions, I'm sure, but that should get us started!
  9. Yep, that's all "amp glow" - looks exactly the same on darks from my own 294MM.

What I would say, though, is it looks like your offset is too low - the minimum pixel value in your single frame is 0, which should not happen. One of mine for reference (same gain, but 3 min exposure instead of your 1 min).

Also (and this may just be me being overly cautious about these things), I'd remove your lat & long values from the FITS header of any FITS file you're making publicly available.
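If you want to check the minimum value and scrub the location keywords yourself, something like this works with astropy (the filename is a placeholder, and SITELAT/SITELONG are the keyword names I see in NINA's files - check your own header, as capture software varies):

```python
from astropy.io import fits

with fits.open("dark_001.fits") as hdul:
    print("min ADU:", hdul[0].data.min())  # 0 here suggests the offset is too low
    header = hdul[0].header
    for key in ("SITELAT", "SITELONG"):     # location keywords, if present
        if key in header:
            del header[key]
    hdul.writeto("dark_001_public.fits", overwrite=True)
```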
  10. Seconded, I love Bill's pixelmath stuff, would highly recommend.
  11. Thanks very much! The top image might just be my best ever capture (and process) - it seems decidedly sharper than anything else I've managed to capture to date. Not sure how much of that is purely psychological, or maybe a product of the high contrast between the bright pillars and the less bright background, but I hope I can produce more images like that!!

I am also of a rare breed who believe in the inclusion of magenta stars in close-up Hubble tribute images. That said, looking at them now, they are maybe a bit too intense, so perhaps I'll dial them back a little. I did use the inverse SCNR green on the wider shot, because they were far too overbearing in that one.
  12. It will be, but making the planets appear larger on the sensor does not lead to more detail captured once you hit the limit imposed by the atmosphere/optics. As a general rule, I usually go for an f/ratio of 5x the pixel size. With 5um pixels, this would be f/25, so a 2.5x Barlow would fit the bill nicely. This gives a sampling rate of ~0.2"/px and would make Jupiter about 240px wide (worked through below).
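The same figures with some assumed numbers plugged in (a 200mm f/10 scope is my assumption here, purely for illustration):

```python
# The "f/ratio = 5x pixel size" rule worked through for an assumed
# 200mm f/10 scope (2000mm native focal length) with a 2.5x Barlow.
pixel_um = 5.0
effective_fl_mm = 2000.0 * 2.5    # f/25 on a 200mm aperture
scale = 206.265 * pixel_um / effective_fl_mm
jupiter_arcsec = 49.0             # apparent diameter near opposition
print(f"{scale:.2f} arcsec/px, Jupiter ~{jupiter_arcsec / scale:.0f} px wide")
# -> 0.21 arcsec/px, Jupiter ~238 px wide
```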
  13. In a follow-up to my WIP image posted a little while ago, this is the final version of the pillars' close-up, plus a fairly quick process of the wider image.

There wasn't too much feedback with regard to improvements on the close-up; that made me think the image was more 'done' than I had initially thought, so I've only done some oh-so-subtle tweaks - namely, to slightly reduce the magenta tones in the blues, then a touch of sharpening, and a bit of work with the stars (which I've actually made even more magenta - I think they looked too washed out in the original). Anyway, here it is:

Next up, the view of the wider image, processed fairly lazily, and with the intention of showing the extent of the nebulosity of the area - as such, the contrast and colour within the bright central region has suffered somewhat, but I think it accomplishes what I set out to achieve. Quite noisy, so don't look too close!! 🙈

I shot the Ha data last year, and I didn't realise at the time, but I clearly rotated my camera at some point between then and when I went back to shoot the Oiii and Sii earlier this year, so this is the widest field of view afforded by the intersection of all the channels.

ZWO 294MM, Esprit 100 and Astronomik 6nm filters
Ha = 14 x 5 min @ gain 120 (2021)
Oiii = 45 x 3 min @ gain 200
Sii = 40 x 3 min @ gain 200
Total integration time = ~5.5hrs
All processing in PI

As always, let me know your thoughts - feedback and critique welcomed 🙂
  14. This is lovely - I really like the colour, and the brightness and contrast are spot on, in my opinion.
  15. I'm going to hazard a guess that StarX created the stars only image by subtraction in this case, so the stars (and other bright areas it misidentified as stars) that were extracted from within M33 are dimmer than they should be, and where you've rescreened the stars back in, they've been put back in at this dimmer brightness value from the stars only image.
  16. Yeah, StarNet just does a simple subtraction of "original - starless" to get the stars only image (as does StarX, although I had heard it has an unscreen option; I don't use StarX though, so don't actually know - it may well just create an unscreened star image by default now).

Subtraction works fine when the stars are just sitting in front of background, but the problem with subtraction is that stars that were over brighter areas end up dimmer and possibly have their colour altered (depending on the colour of the star, and the colour of whatever was behind them). Unscreening gets around this, and extracts the stars at pretty much their full brightness and with minimal impact on their colour (if any), regardless of what was behind them.

The 2 stars only images below, taken from the PixInsight forum post I linked to, show the difference between simple subtraction and unscreening.

Subtraction:

Unscreened:

To get the bottom image, you create a duplicate of whatever you're working on, run StarNet/StarX, then do ~((~original)/(~starless)). Continue and do whatever you want to your stars only and starless images, then recombine with the pixelmath I posted previously.

Edit: I should also mention rescreening the stars back into the image puts them back at basically their original brightness (i.e. before they were extracted). Addition will obviously just add the star brightness to the starless image brightness, which can then cause clipping of the stars if you increased the level of the stretch on the starless image. A multiplier applied to the stars only image during addition recombination can somewhat mitigate this, but it's an imperfect solution, as you've likely brightened or darkened different parts of the image in different ways.

By the way, I take absolutely no credit for this, I merely stumbled across the post on the PI forums.

P.s. nothing wrong with being lazy! 😉
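For anyone who'd like to see the arithmetic outside PixelMath, here's the same pair of operations as a numpy sketch (assuming images normalised to [0, 1]; ~x in PixelMath is 1 - x here):

```python
import numpy as np

def unscreen_stars(original: np.ndarray, starless: np.ndarray) -> np.ndarray:
    # PixelMath: ~((~original) / (~starless))
    # The clip guards against division by zero where starless is pure white.
    return 1.0 - (1.0 - original) / np.clip(1.0 - starless, 1e-6, None)

def rescreen_stars(starless: np.ndarray, stars: np.ndarray) -> np.ndarray:
    # PixelMath: ~((~starless) * (~stars))
    return 1.0 - (1.0 - starless) * (1.0 - stars)
```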
  17. That formula can work - it depends how the stars only image was created. The post I linked to explains how to unscreen stars using StarNet (I think the latest version of StarX has an option to create an unscreened star image automatically). The pixelmath ~((~starless)*(~stars)) is used later to add the stars back in.
  18. That could well be it, especially if you've seen no issue on previous (fully temperature matched) images. On another note, I recently happened upon some nifty pixelmath for removing and re-adding stars on the pixinsight forums: https://pixinsight.com/forum/index.php?threads/unscreening-and-re-screening-recombining-stars-with-starless-images.18602/ Probably already a well known technique to some, but it helped me, so thought I'd share.
  19. Light leak? (in either the darks or flat darks) - would only have to be very slight to make a difference. How old is the master dark? Gain and/or offset inadvertently changed?

I vaguely recall something with APT's flat wizard where it did something weird which made flats overcorrect, but I think I've seen you use NINA, right? So probably not that.

I've not heard of the 2600 being particularly fussy (unlike my 294... 😑), but I temperature match all my frames. Might help for the future if you don't already (certainly won't hurt in any case).
  20. The joys of image calibration! My first thought would be improperly subtracted darks. Did you take darks (for the lights) and flat darks (for the flats)?
  21. Well, I was going to buck the trend and say the top one was the winner for me (I think the bottom one is just a bit too sharp - the top seems more natural to me, but both are great images nonetheless). However, the blend you've put up there now beats them both, I think.
  22. That's nice - looks very clean for just 3 and a bit hours. I think you may have some overcorrecting flats (inverse vignetting visible in the corners), and I wonder if there's a tad too much green in the blues?
  23. I see you've already made your purchase, so this is a largely pointless addition to the conversation 😀 Visible light spans roughly 400 - 700nm, so the UV/IR cut window only removes a tiny section at the extreme red end, and all the major nebula emission lines (Oiii/Ha/Sii) will pass through to the sensor.
  24. Nice capture - I haven't really seen any images where this object takes centre stage (it seems normally to be the poor relative of its much more famous nearby nebulae). One thing I would say, though, and this is probably a personal thing, is that the transition between the blues and the surrounding brownish dust is quite hard (I'm guessing you used some selective colour masking during processing) - it does give some sense of depth, but I think a softer transition would give a more natural appearance. On the other hand, if there was no selective masking done and this object does have these fairly prominent transitions between the gases, then feel free to tell me to shut up and go away! 😁
  25. Easy. Do everything you said using curves, write a simple max(image a, image b) pixelmath expression, recoil in horror at all the artifacts, make further adjustments, write a more complex pixelmath expression, recoil in horror again (repeat previous steps as many times as required), then give up and move to a layers program 😁