Posts posted by StuartT

  1. 18 minutes ago, Ouroboros said:

    I’ve been watching Adam Block’s videos and then discovered this recently published YouTube video. To my as-yet-inexperienced eye it seems quite a good summary of how to use the script.

    Great! Thanks. I have been looking for a video on 2.5.1 but didn't see one. 

    EDIT: ok so this is a nice overview of WBPP, but it doesn't tell you about the many new features.

    • Like 1
  2. If I'm honest, I am struggling with 2.5.1 - it seems to have a lot of new stuff which is causing problems for me. 

    Presumably there must be a way I can revert to the previous version of WBPP? It's all kept in C:\Program Files\PixInsight\src\scripts\WeightedBatchPreprocessing, so if I can find a download of the files that made up the previous version somewhere, I could just replace them all, right?

    • Like 1
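The file swap described in the post above can be sketched as a small script. This is a hedged sketch only: the old-version path is a hypothetical placeholder, and whether replacing the folder wholesale is enough for WBPP (or survives a PixInsight update) is an assumption.

```python
# Hedged sketch: back up the current WBPP script folder, then drop in an
# older copy. Both paths below are assumptions for illustration only.
import shutil
from pathlib import Path

CURRENT_WBPP = Path(r"C:\Program Files\PixInsight\src\scripts\WeightedBatchPreprocessing")
OLDER_WBPP = Path(r"C:\Temp\WBPP-previous")  # hypothetical extracted older release

def revert_wbpp(current: Path, older: Path) -> Path:
    """Replace `current` with `older`, keeping a .bak copy of `current`."""
    backup = current.with_name(current.name + ".bak")
    shutil.copytree(current, backup, dirs_exist_ok=True)  # safety copy first
    shutil.rmtree(current)                                # remove newer version
    shutil.copytree(older, current)                       # restore older version
    return backup
```

Keeping the `.bak` copy means the newer version can be restored just as easily if the rollback causes its own problems.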
  3. 3 hours ago, wimvb said:

    If you're not sure where to set samples in DBE, download a wider view image with the dust well defined, from the internet, and star align that to your image. This is your reference. Then set sample points in the reference image. Save the instance of DBE to the workspace. Then apply to your image.

    Now this is an excellent idea!

  4. Thanks everyone. I was thinking of switching out the L Extreme for the L Pro (which lets a lot more through), but the consensus would seem to be for no filter at all.

    In which case, have you any tips as to how to process out the inevitable skyglow? (I'm not used to shooting naked!) Presumably it's just a matter of good background extraction in PI? 
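On the skyglow question above: conceptually, background-extraction tools like DBE fit a smooth model through background sample points and remove it. A minimal NumPy sketch of that idea follows; this is not PixInsight's actual algorithm, and the polynomial order and sample format are assumptions for illustration.

```python
# Hedged sketch of what a DBE-style background extraction does conceptually:
# fit a smooth 2-D polynomial through background sample points, then
# evaluate that model over the whole frame and subtract it.
import numpy as np

def fit_background(samples_xy, samples_val, shape, order=2):
    """Model the sky background from (x, y) sample points and their values."""
    x, y = np.asarray(samples_xy, float).T

    def design(xs, ys):
        # all monomials x^i * y^j with i + j <= order
        cols = [xs**i * ys**j for i in range(order + 1) for j in range(order + 1 - i)]
        return np.stack(cols, axis=-1)

    coef, *_ = np.linalg.lstsq(design(x, y), np.asarray(samples_val, float), rcond=None)
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    return (design(xx.ravel().astype(float), yy.ravel().astype(float)) @ coef).reshape(shape)

# flattening a frame: flattened = image - fit_background(samples, values, image.shape)
```

The key practical point (as in DBE) is choosing samples that really are background, away from stars and nebulosity.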

  5. On 01/09/2022 at 15:24, wimvb said:

    I also live in a Bortle 5 area (sky brightness around magnitude 20.5 per square arcsecond), and don’t use any LP filter. But I do need to collect a lot of data to bring out faint nebulosity. Plan to collect at least 10 hrs of data. One other trick you can use is to extract the luminance channel of your colour image, and process that separately. Then combine as LRGB. This can keep the colour mottle under control.

    Many thanks for this advice. I shall try this

  6. 27 minutes ago, wimvb said:

    If you can get a convincing combination of the data, this can work. But as long as the light pollution isn’t too overwhelming, you can also combine two versions of the same master with different stretch. I’ve done this with good result in an image of the Orion Nebula. However you choose to do it, you need a lot of data to bring out the faint dust.

    Good point about two different stretches. I wouldn't have thought of that! I use GenHypStretch in PI which provides a lot of fine control.

    My LP isn't too bad. I am Bortle 5-ish.
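The two-stretch idea above can be illustrated numerically. This is a hedged sketch using an asinh stretch as a stand-in (GHS itself is not reproduced here), blending a gentle stretch where the image is bright with a harder stretch where it is faint; the stretch strengths are arbitrary example values.

```python
# Hedged sketch of blending two stretches of the same linear master:
# a gentle stretch protects the bright core, a hard stretch lifts faint
# dust, and a brightness-based weight blends between them.
import numpy as np

def asinh_stretch(img, a):
    """Simple asinh stretch, normalised so 1.0 maps to 1.0."""
    return np.arcsinh(a * img) / np.arcsinh(a)

def blend_two_stretches(linear, gentle_a=5.0, hard_a=500.0):
    gentle = asinh_stretch(linear, gentle_a)  # protects bright regions
    hard = asinh_stretch(linear, hard_a)      # lifts faint regions
    w = gentle                                # bright areas favour the gentle version
    return w * gentle + (1 - w) * hard
```

In PI the same effect could be had by blending two stretched copies with a luminance mask; the weighting scheme here is just one possible choice.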

  7. 31 minutes ago, Elp said:

    A good image for a short total exposure; you've got a lot of inner detail. I did over 20 hours on this target and didn't get much further than the initial few hours. A lot comes down to processing, especially with something like PI (which I don't use); I've seen amazing images of this target with similar duration to yours as a result.

    This is worth knowing. I think I will persevere and see what a few more nights can add. BTW - I do everything in PI (calibration, cosmetic correction, integration, post processing). It's a truly amazing piece of software. 

    • Like 1
  8. 36 minutes ago, wimvb said:

    This nebula has a very large dynamic range, so you need lots of data, especially if you block part of the light. I’d suggest you skip the L-extreme here.

    Thanks. Might it even be worth collecting two different exposure times? One for the core and one for the outer parts?

  9. I'm a little underwhelmed by last night's work. This is 4 hours on the Iris Nebula with an Esprit 150, ASI2600MC and L Extreme filter. It took a lot of pulling and pushing to get even this. I guess it's just a very faint target and will need substantially more time. Also, I am not sure I should have used the L Extreme (since this is not an emission nebula). Maybe it's more of a broadband target?

    300s subs

    NGC7023 night 1 result.jpg

    • Like 8
  10. Thank you both!

    1 hour ago, The Admiral said:

    Very nice Stuart. Your camera and scope must be well matched, as you seem to get sharp and detailed images. Incidentally, do you use a reducer with the scope?

    Ian

    The camera (ASI2600MC) plays nicely with the Esprit for sure. I often use the dedicated 0.77 reducer, but not on this occasion, so it's at native f/7

     

    1 hour ago, Mandy D said:

    Absolutely beautiful! 4.5 hours very well spent on a truly gorgeous nebula. It looks very much 3D in the central regions.

    I'm going to try and get some more tonight. Yes, the slightly 3D effect is thanks to the rather wonderful Generalised Hyperbolic Stretch in PixInsight (my new favourite script) and a little Local Histogram Equalization.

    • Like 1
  11. I decided my Edge 9.25 HD was the wrong scope for the Pacman. Too zoomed in and a poor image scale (0.33"/px). Also, I can't attach a filter to the Edge and I wanted to use my L Extreme (which I love!). So I swapped it out for my Esprit 150 and added the L Extreme.

    Here is 4.25 hours from last night.

    NGC281 Hubble process final s.jpg

    • Like 6
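The 0.33"/px figure quoted above follows from the standard plate-scale formula. The 3.76 µm pixel size for the ASI2600MC and the Edge 9.25 HD's 2350 mm native focal length are taken from the usual spec sheets, so treat the exact numbers as assumptions.

```python
# Plate scale in arcsec/px: 206.265 * pixel size (um) / focal length (mm).
def image_scale(pixel_um: float, focal_len_mm: float) -> float:
    return 206.265 * pixel_um / focal_len_mm

edge_925 = image_scale(3.76, 2350)    # ~0.33"/px at the Edge's native f/10
esprit_150 = image_scale(3.76, 1050)  # ~0.74"/px at the Esprit 150's native f/7
```

This makes the scope swap easy to see: the Esprit more than doubles the image scale, which suits typical seeing far better.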
  12. 6 minutes ago, vlaiv said:

    You need the median background to be higher than a certain number, which you calculate from your bias mean value and read noise.

    If you don't already know how to do it - then don't worry now, just capture as you normally would.

    For next time when planning a session - you'll examine subs from tonight's session and based on that you can calculate if you need longer exposure or not.

    There are explanations on how to do it - just search for optimum sub exposure length and you'll find plenty of info.

     

    I just tried the Optimum Exposure tool in NINA and it suggested 37s - this seems rather short

    [attached screenshot of the NINA exposure calculator result]

  13. 2 minutes ago, vlaiv said:

    You are using ASI2600MC, right?

    That is a CMOS camera, and for a CMOS camera it does not matter whether you bin at capture time or later in software. In fact, it is better to do it in software, as you have more flexibility in how you do it (for example, split-debayer first and then stack + bin).

    You will get good final resolution for your conditions tonight (and improved SNR) if you do it as described above.

    Just capture at bin 1 and you can handle everything later. The only thing to worry about is swamping the read noise with background signal. That is it for capture.

    yes. 2600MC

    How can I know if I have swamped the read noise with background?
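The "have I swamped the read noise?" check vlaiv describes can be sketched in a couple of lines. This is a hedged sketch: the swamp factor k and all the example numbers are assumptions (common rules of thumb put k somewhere around 3-5), and gain (e-/ADU), bias level and read noise must come from your camera's spec sheet or a bias frame.

```python
# Hedged sketch: is the sky background deep enough that its shot noise
# swamps the read noise? Criterion used here: sqrt(sky_e) >= k * read_noise.
def sky_rate_e(median_adu, bias_adu, gain_e_per_adu, exposure_s):
    """Sky signal rate (e-/px/s) estimated from a test sub's median background."""
    return (median_adu - bias_adu) * gain_e_per_adu / exposure_s

def min_exposure_s(sky_rate, read_noise_e, k=5.0):
    """Shortest sub whose sky shot noise is k times the read noise."""
    return (k * read_noise_e) ** 2 / sky_rate
```

As an illustrative (assumed) example: with the ASI2600MC's roughly 1.5 e- read noise at gain 100 and a measured sky rate of ~2.5 e-/px/s, k = 5 gives about 23 s, which is in the same ballpark as NINA's 37 s suggestion.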

     

  14. On 05/08/2022 at 22:19, vlaiv said:

    It is a special type of debayering that works a bit like super-pixel mode (it keeps the lower sampling rate of the Bayer matrix instead of interpolating) and splits the channels in the process.

    It produces 1 red, 1 blue and 2 green subs from each of the raws (do this after calibration), but each x2 smaller in width and height.

    You end up with red, green and blue subs after that (there will be twice as many green subs as raws you recorded). Then you stack them as you would normally do with mono/(L)RGB - each colour into its own stack - but align them all to the same reference frame.

    After you finish stacking, bin x2 each of the stacks while still linear.

    The first part - the split debayer - turns 0.33"/px data into 0.66"/px data, and the second part - the x2 bin - turns that 0.66"/px data into 1.33"/px data.

    By the way - operation in PI that does split debayering is called SplitCFA - here is forum post about it:

    https://pixinsight.com/forum/index.php?threads/splitcfa-mergecfa.14494/

    @vlaiv Can I just double-check this with you? My guiding is only at about 0.4"-0.5" this evening. Do you still think it's OK to bin 1x1 (0.33"/px)? Shouldn't guiding be equal to or better than the image scale?
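The arithmetic in vlaiv's recipe (0.33 → 0.66 → 1.33 "/px) comes from halving the pixel grid twice. Below is a hedged NumPy sketch of the idea, assuming an RGGB mosaic layout; PixInsight's actual SplitCFA process is not reproduced here.

```python
# Hedged sketch of the split-CFA idea on an RGGB Bayer mosaic: each colour
# plane is pulled out at half resolution (doubling arcsec/px), and a later
# 2x2 software bin halves the resolution (doubles arcsec/px) again.
import numpy as np

def split_cfa(raw: np.ndarray):
    """Split an RGGB mosaic into R, G1, G2, B planes, each half-size."""
    r = raw[0::2, 0::2]
    g1 = raw[0::2, 1::2]
    g2 = raw[1::2, 0::2]
    b = raw[1::2, 1::2]
    return r, g1, g2, b

def bin2x2(img: np.ndarray) -> np.ndarray:
    """Average 2x2 blocks: halves each dimension, doubling arcsec/px."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    x = img[:h, :w]
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
```

Applied to 0.33"/px raws, `split_cfa` yields 0.66"/px colour planes, and `bin2x2` on the stacked masters yields 1.33"/px, matching the figures in the quote.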

  15. 20 hours ago, ollypenrice said:

    I don't do false colour imaging but colour weighting is what it is in any scenario.  Firstly I'd follow Vlaiv's suggestions out of pure common sense. When you want to reduce the impact of the Ha you have two simple options, bringing in the black point and adjusting the stretch. You can do both in Levels. I'd begin by looking at the colour balance in the background sky. If you want a neutral dark grey you need parity between channels. I'd probably use black point adjustment to get there.

    Next you want to balance the colours above the background sky level, so I'd use Curves.  You could pin and fix the background at a chosen level then lower the Ha curve above that. In a civilized program like Ps you can do that while looking at the three-channel image in real time as you adjust the stretch in one channel only.

    But..  if you colour map as per Hubble and shoot an Ha-dominated image it's going to come out very green, surely? Why not try HOO instead?

    Or...  do a nicely colour balanced image in which the Ha is greatly under-used and save that as a colour layer. Then use the Ha as luminance to restore the Ha interesting parts.

    Olly

    Thanks. This was very helpful. I carefully stretched each channel first to ensure the H-alpha wasn't overpowering. Then, when they were combined, I got a much more pleasing result. As you suggested, I then added a (more fully stretched) H-alpha channel as luminance to add the interesting bits back in.

    I have learned a lot in the process. Much obliged Olly

    NGC6357 final process.jpg

    • Like 3
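Olly's "parity between channels" point can be illustrated numerically. This is a hedged NumPy stand-in for the black-point step (Photoshop's Levels/Curves are not reproduced): shift each channel so the background medians agree, assuming background pixels dominate the frame.

```python
# Hedged sketch of background neutralisation: offset each channel's black
# point so the three background medians match, giving a neutral dark grey.
import numpy as np

def neutralise_background(rgb: np.ndarray) -> np.ndarray:
    """rgb: HxWx3 float image where background dominates the median."""
    medians = np.median(rgb, axis=(0, 1))  # per-channel background estimate
    target = medians.min()                 # pull brighter channels down to it
    out = rgb - (medians - target)         # per-channel black-point shift
    return np.clip(out, 0.0, None)
```

In practice you would estimate the medians from background-only regions rather than the whole frame, for the same reason DBE samples must avoid nebulosity.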
  16. 56 minutes ago, Anthonyexmouth said:

    if you haven't got it, grab the demo of APT and see if that will connect and track. 

    Thanks. 

    I think I have this fixed now. Somehow, when I reconnected NINA, it seemed happy to play nicely with EQMOD and track normally. I did some solar imaging (with the tracking rate set to solar) and the Sun stayed put.

    So hopefully tonight, it will do the same when I do some galaxy-ing

    Sometimes things just seem to fix themselves and I never know what I did to bring it about! 

  17. 2 minutes ago, tomato said:

    Clutch tight on the RA drive?

    yep

    1 minute ago, newbie alert said:

    I've no idea how eqmod works, but ascom tends to stick the boot in every now and then..

    I'd bring up the ascom diagnosis, and just make sure the mount is connected in ascom.. then connect as usual in Nina... Best thing is you can do this during the day and not have to wait for nightfall

    Ok. I've never used ASCOM Diagnostics, but I'll give that a go.

  18. 10 minutes ago, newbie alert said:

    Take it you're using EQMOD? Was it unparked in both programs? Not sure which takes priority... Can you hear the hum from the steppers, like it is tracking?

    It certainly could be PA; it's certainly drifting in that direction.

    yes. EQMOD is being controlled by NINA. I've used this setup for about a year, so I know it works.

    My PA must have been a long way out to show trails though. Although I am at a focal length of 2350 mm, so I guess it's a possibility.
