Everything posted by BrendanC

  1. OK, a bit of hacky data processing and maths... Process the above image so it has a dark background, upload it to Google Photos, grab the text with Google Lens, and copy it to my PC (without the dark background, Google Lens doesn't detect some of the text). Paste the values into Word, then convert them to a table with 19 columns. Paste that table into Excel and sum all the values. Comes to 345.384. Interestingly, this implies that if I had no vignetting, and all the values were 1, the sum would come to 361 (i.e. 19 columns x 19 rows), so my current setup is 345.384/361 = 95.7% efficient given the current vignetting. Paste the values again but rejig the table so that they 'reflect' side to side and top to bottom, as if the sensor were dead centre, and sum that. Slightly subjective but roughly right. Comes to 348.218. So, it seems to me that my off-centre sensor is collecting 345.384/348.218 = 99.2% of the light that it would were the sensor absolutely centred. (This could probably also all have been done using averages.) I think I'll leave it.
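     For reference, a minimal Python sketch of the same arithmetic, assuming the 19x19 values have been exported to a CSV file (grid.csv is a hypothetical name) and reusing the manually 'reflected' sum quoted above:

     ```python
     import numpy as np

     grid = np.loadtxt("grid.csv", delimiter=",")  # 19x19 relative-intensity values

     total = grid.sum()        # ~345.384 with the values above
     ideal = grid.size         # 361, i.e. every cell equal to 1 with no vignetting
     centred_sum = 348.218     # sum of the manually 'reflected' (centred) table

     print(f"vs no vignetting:  {total / ideal:.1%}")        # ~95.7%
     print(f"vs centred sensor: {total / centred_sum:.1%}")  # ~99.2%
     ```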
  2. I just discovered this cool feature in ASTAP which shows the proportional differences. But again, I don't know how to interpret them. I guess it means I'm losing a small percentage of light, which I could in theory actually calculate if I could somehow export these figures.
  3. OK, so this has uncovered yet another issue. I spent quite some time using the APT flats aid to get my flats to 30K ADU. Yet when I measure a flat in APT, going clockwise from top left to bottom left, the corners read 40015, 36117, 37160 and 41142, and the centre reads 43347. So I'm probably going to have to revisit this too, although (again) calibration seems to work OK. But, ignoring that for now, I wouldn't even really know whether these are acceptable figures. All I can say is that the difference between the highest and lowest values is about 17% (quick check below).
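     A quick way to quantify that spread from the five measured patches (the values are taken from the post; in practice measure them in APT or ASTAP):

     ```python
     # Corner and centre ADU readings from the flat, as measured in APT.
     corners = [40015, 36117, 37160, 41142]
     centre = 43347

     values = corners + [centre]
     spread = (max(values) - min(values)) / max(values)
     print(f"Peak-to-peak falloff: {spread:.1%}")                     # ~16.7%
     print(f"Dimmest corner vs centre: {min(corners) / centre:.1%}")  # ~83.3%
     ```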
  4. Thanks. I think I will leave it for all the reasons you give. But I just know it'll continue to bug me! Out of interest, which of the two options in the graphic do you think is applicable? I always thought the mirror should be central, which yields the offset pattern. But, if the mirror needs to be offset, what happens then? Or, if this is really a very small adjustment, then perhaps the mirror will still appear central?
  5. TL;DR - my sensor isn't central; is it the secondary screw, if so how does that impact collimation, and do I even need to fix it?

     Right, this has been bugging me for a while and I've decided to at least try and understand it, if not fix it. My 130PDS is now perfectly collimated (ideal pattern from a Tri-Bahtinov mask, a Cheshire, and my own method using a mobile phone and Al's Collimation Aid), and ASTAP is showing that I have virtually no tilt. However, my flats look like this, from a Lacerta panel, and I think that, when I look carefully at my subs, they do too. Clearly, the sensor isn't central. It's probably always been like this but it's now bothering me.

     I wondered whether the fix could be to use the central screw to move the secondary up/down the tube, and I've had responses elsewhere from fairly knowledgeable people who think so too. However, when I think about this some more, I get confused. My understanding is that, according to pretty much every collimation guide I've read, the secondary mirror should be centrally aligned relative to the focuser tube, and that for a fast scope (around f/5 or faster) this produces the offset collimation pattern when viewed through a collimation cap/Cheshire. So, a central mirror produces an offset pattern. However, if I'm going to be moving the secondary up/down the tube, isn't that going to affect this? In other words, it implies that the secondary should be physically offset, which implies I would then see a central collimation pattern. So, an offset mirror produces a central pattern? If neither of these, then what should I be expecting to see through the Cheshire if I physically offset the mirror? Or, is this such a small adjustment, given the size of the camera's sensor, that the secondary will still appear central? I've created a quick diagram illustrating what I see as the options. Which one applies, do you think?

     Thing is, flats seem to calibrate it out, so is it even worth looking into? I'd like to fix it, but not at the expense of a ton of bother. Any takers? Can anyone offer a fairly confident diagnosis of the problem, and a fix, and explain how it impacts collimation? Or should I leave well alone if it calibrates out? Thanks, Brendan
  6. OK, I actually followed that and understood it, so I might be getting somewhere. Thanks. I've got @vlaiv's method captured in a spreadsheet but will also look at shooting to 2224 ADU to achieve a swamp factor of 5, and see how I get on.
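     For anyone following along, here is a rough sketch of how a 'swamp factor' turns into a target ADU at the scope. The read noise, e/ADU and bias values below are placeholders for illustration only; the exact 2224 ADU figure in the thread depends on the camera settings used there.

     ```python
     def target_adu(read_noise_e, gain_e_per_adu, bias_adu, swamp=5):
         """Background level (ADU, bias included) at which the sky shot noise
         is `swamp` times the read noise: signal >= (swamp * read_noise)^2."""
         target_electrons = (swamp * read_noise_e) ** 2
         return bias_adu + target_electrons / gain_e_per_adu

     # e.g. 1.4 e read noise, 0.4 e/ADU and a ~2000 ADU bias level
     print(target_adu(1.4, 0.4, 2000))  # ~2122 ADU
     ```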
  7. Brilliant, thanks! I just realised I do have 60s darks (should have been obvious) which are at the same gain, offset, temp etc, and they're also at 1997 so that part's still correct. Now I'm going to dig around for the e/ADU figure from Sharpcap and plug that in. Again, thanks for laying this all out for me. I tend to learn by seeing examples and then reverse engineering them, so this is ideal.
  8. This is incredibly useful, now I think I'm starting to get somewhere, thank you. I've built this into a rough spreadsheet, attached, with notes. The ADU for the bias is estimated - I don't have a value for a 60s bias to hand, but I'll try and get one later today. I'm also taking the e/ADU value from the Sharpcap graphic above. I've also got my own results for that somewhere; again, I'll look it up when I get the chance. So, with those figures plugged in, I'm getting about 100s. This almost starts to look about right, and gives me a basis for understanding this. If you could quickly look at the spreadsheet I'd appreciate it. Thanks again, Brendan vlaiv calc.xlsx
  9. Re the calculator - why would I shoot to a required ADU? How would I establish what that should be? And what is it actually calculating - as in, why specify a sample exposure time, and then it calculates the exposure?

     @vlaiv - I don't know where to begin with what I find confusing about all this! 'you take a calibrated sub (you should really calibrate your subs for stacking, so there is no reason not to have calibrated value) - you measure median value on patch of background in the image - multiply with a number and compare to reference value you calculated from read noise.' - sounds easy, but what I'm missing is the steps, and how I would do this. As in 'step 1, 2, 3', with guidance on how to actually get these values. Then we're talking ADUs, electrons, gain, offset, 14-bit, 16-bit, whatever else can be thrown into the mix. I still think DSLR was easy - in APT, I could just use the histogram and shoot to 1/3-1/2 of the way across it. Job done.

     I'm just not that great with maths, and I find it very frustrating that I cannot find a simple, quick, easy-to-follow 'recipe' that will allow me to figure out how long I should shoot for, actually while at the scope. For example, using my figures, you say I'm shooting x19 longer than I should. If I'm shooting 120s BB and 300s NB, then that implies I should be shooting 6s BB and 15s NB! I'm shooting at F4.5 in a Bortle 4. Which brings me to another point: all the calculations and spreadsheets seem to be about getting the minimum exposure to swamp read noise, with no consideration of what an optimal, real-life exposure would be. Again, this is not as easy as 'expose until 1/3 to 1/2 of the way across the histogram', which I used with no problem whatsoever with my DSLR.

     I did once find this, which I thought would work: 'Simply take a Bias and look at the AVG ADU. Generally you want the AVG ADU of your LIGHT to be 400-600 above your Bias (for monochrome). If you are shooting OSC, then you will want to be ~1200 above the Bias (2-3 times higher because of bayer matrix). For example, if your Bias Avg ADU is 500, then your target Light Avg ADU will be around 1000 for monochrome, or 1700 for osc. Take a 30 second Light image, and adjust your exposure until you reach your Target Avg ADU.' (https://www.cloudynights.com/topic/781537-understanding-adu-reading/?p=11256105). This implied that I would shoot to an ADU of 3,200, which worked for 300s NB exposures at gain 200, but for gain 101 BB, it came out at 540s! So, nice 'recipe', easy to follow, works in the field - but it gives results that are far too long rather than far too short. Another dead end. (A rough sketch of that recipe is below.)

     So, I'll bow out of this thread now. I'm clearly lacking the brainpower to understand this. I'll just stick with 60s or 120s BB at gain 101, and 300s NB at gain 200, without really knowing why, because it seems to work. Thanks, Brendan

     PS FWIW, I've attached my 'Exposures calculator for dummies' spreadsheet which shows the kind of thing I'd be looking for. Perhaps it's right? Who knows? Exposure calculator for dummies.xlsx
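     For what it's worth, the quoted Cloudy Nights 'ADU above bias' recipe boils down to a simple linear scaling, sketched below. This is a sketch only: it assumes the sky background scales linearly with exposure time once the bias is subtracted, and the example numbers are placeholders.

     ```python
     def suggested_exposure(test_exposure_s, test_adu, bias_adu, target_above_bias=1200):
         """Scale a test sub so its background lands target_above_bias ADU above the bias
         (use ~400-600 for mono, ~1200 for OSC, per the quoted recipe)."""
         background = test_adu - bias_adu
         return test_exposure_s * target_above_bias / background

     # e.g. a 30 s test sub whose background reads 2150 ADU against a 2000 ADU bias
     print(suggested_exposure(30, 2150, 2000))  # -> 240 s
     ```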
  10. I wasn't using calibrated subs, but what I can say is that my bias ADU is 2,000, so could that help here? So, if my sub's ADU is 5,000 and my bias is 2,000, does that mean I could assume 3,000 for a calibrated sub? The reason I ask is that I want something I can understand and use in the field, i.e. take a sub, evaluate it (using APT's Pixel Aid if possible), and act on that. Calibrating the sub wouldn't really be part of this process if at all possible. I have literally no idea whether e/ADU is 0.4 - I was reading one of the ZWO charts that seemed to be relevant.

     OK, so this simple method is turning out not to be so simple, as is often the case. What I will never understand is why this was so very simple with a DSLR (expose until 1/3 to 1/2 of the way across the histogram), and so very opaque with a CMOS camera. Since moving to a CMOS camera over a year ago, I have never found a simple, understandable, step-by-step method for determining an optimal exposure time. Like I say, I have a calculator of my own based on Robin Glover's presentation, with mysterious C factor values and suchlike, but I don't really understand the logic of what I'm doing, or how to vary things depending on shooting conditions. All I know is that given different gain values I can have different noise values, but it all seems very arbitrary to me. So, thanks anyway, but I think I'll just never understand this.
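     The field-side approximation being described would look something like this - a sketch only, assuming dark current is negligible over these exposures and taking the 0.4 e/ADU figure at face value:

     ```python
     raw_background_adu = 5000   # measured on an uncalibrated sub (e.g. APT's Pixel Aid)
     bias_adu = 2000             # measured bias level
     gain_e_per_adu = 0.4        # assumed from the ZWO chart; worth verifying in SharpCap

     background_adu = raw_background_adu - bias_adu   # ~3000 ADU, the 'calibrated' estimate
     background_e = background_adu * gain_e_per_adu   # ~1200 electrons
     print(background_adu, background_e)
     ```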
  11. I've no idea. I just want someone to help me with step 3 onwards! Having said which I totally appreciate I may have hijacked your thread here. I just figured it was an old one and it would be OK to revive it.
  12. Interesting. I was kind of hoping I could run CUDA with an nVidia card to speed up things like BlurXTerminator and suchlike. As you say though, it's WBPP that hurts most.
  13. Nope. Just want a blazingly fast machine that won't take hours to stack images and process them!
  14. Thanks, but I really really really really really want to know how to continue the method above, from step 3 onwards. I also shoot gain 200 with 300s for duoband, gain 101 with 60/120s for broadband... but I want to get a deeper insight as to how and why this works.
  15. I'm after a laptop - I really don't want to have to sit at a workstation. I do a lot of my processing just sitting wherever I want. I know, I know, this means less bang for the buck, but it's how I work. I have to say I'm not absolutely sure any more where I came across that £3K figure but I think it was this as a starting point: https://www.pcspecialist.co.uk/form-view/recoil-VII-17/Recoil-Ultra-17/

     Ditto. I could revamp my old machine too, but I want a laptop. That's why it would need to interface directly with OneDrive, where I have all my data stored. I know Azure would do it, but I don't know if Shadow would. They say they work with it, but I'd need direct transfer, not download/upload.

     That's a very interesting idea actually! Never even thought of that. Problem is, I expect most lease-hire operations just do fairly standard machines whereas I want something blazingly fast. Also, from what I've seen, cloud-based virtual machines absolutely outstrip local processing so there would still be better value going the cloud route.
  16. Thanks, but I've given up on all the online calculators. I just don't understand them. I've even created my own from the Robin Glover presentation... but I don't really understand how or why it works. I just want to know how to continue with the 'simple' method outlined in this thread - and it doesn't surprise me in the slightest that it's not as simple as I thought! I really really really wish it was as simple as with a DSLR: expose to 1/3 to 1/2 of the way across the histogram. To answer your questions: the median for a bias is around 2,000, and the offset is 50. So, how would I progress from step 3 above to an actual figure for my exposures?
  17. Hi all, I've been considering getting a better machine for running PixInsight. When I start running through the specs and totting up how much it would cost, I'm well north of £3K. This, for a machine that will be average within 2-3 years, and getting long in the tooth after five. So I was looking at the possibility of cloud services, i.e. basically a Windows virtual machine that I can install my software on, plug my OneDrive into, and use to process much more quickly. The major providers such as Microsoft/Amazon/Google seem very full-on, but I do like the look of Shadow (https://eu.shadow.tech/shop/en-GB). It seems much easier to set up, and even at £30-£45 per month that's going to work out much cheaper than a new machine, at least for the few months I'd be trialling it. Any thoughts? Thanks Brendan
  18. I know this is an old topic but I'm always on the lookout for interesting new ways to confuse myself about exposure times. So, @vlaiv, I've tried following your method. Assume I'm using a 533MC Pro, at gain 200, and my latest shots have a mean ADU of around 5000.

     1. Look up the read noise for the particular gain setting you want to use: for gain 200, according to the ZWO charts, it seems to be about 1.4e.

     2. Measure what sort of light pollution you have (measure the background ADU with that particular setup, from any of the images you've taken with it): my most recent shots are around 5000 ADU average (it was a full Moon).

     3. Convert from ADU to electrons, measuring on linear calibrated data: not sure what I'm doing here, but there's a chart for the 533 that seems to imply that e/ADU at gain 200 is about 0.4. So, does that mean I multiply 5,000 by 0.4 to get electrons? In which case, this is 2,000. Also, why would I need to use calibrated data here?

     4. The ratio of background noise to read noise should be at least 3, but preferably 5 or more. This means that you need to make your exposure long enough that the background signal (LP level) reaches about 380e (3.9 x 5, all squared).

     This is where I'm really lost. I don't understand how 'the ratio should be at least 3 but preferably 5' equates to '3.9 x 5 all squared'. I understand that the 3.9 here is the read noise at gain 0, so I would substitute 1.4, but I still can't quite make the leap from step 3 to this one. So, what would be my next step after step 3 above, given the data I already have?
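     By way of illustration only (not vlaiv's exact figures), here is a rough sketch of carrying the method through to an exposure length. It assumes 1.4e read noise at gain 200, 0.4 e/ADU, a ~2000 ADU bias level, that the 5000 ADU background came from a sub of known length (300s is used as a placeholder), and that dark current is negligible.

     ```python
     read_noise_e = 1.4        # step 1: read noise at gain 200 (from the ZWO chart)
     gain_e_per_adu = 0.4      # assumed e/ADU at gain 200
     swamp = 5                 # want background noise >= 5 x read noise

     # Steps 2-3: measured background, converted to electrons (bias removed first).
     test_exposure_s = 300                            # placeholder length of the measured sub
     background_adu = 5000 - 2000                     # measured ADU minus bias level
     background_e = background_adu * gain_e_per_adu   # 1200 e-

     # Step 4: shot noise = sqrt(signal), so to get noise >= swamp * read_noise
     # the background *signal* must reach (swamp * read_noise)^2 electrons.
     target_e = (swamp * read_noise_e) ** 2           # (5 x 1.4)^2 = 49 e-
     # (the 380e in the quoted step is the same formula with 3.9e read noise at gain 0)

     # Step 5: sky signal grows linearly with time, so scale the test exposure.
     required_s = test_exposure_s * target_e / background_e
     print(f"{required_s:.0f} s")                     # ~12 s with these placeholder numbers
     ```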
  19. I think we've reached the point now where I say "I like kittens" and leave the room...
  20. Outstanding as ever, thanks vlaiv! Another thing I'd add is that, when shooting multiple targets in a night (which I have done before), you can shoot until the meridian flip then go to another target instead. Makes for more productive time than waiting for the target to pass overhead. I have a planning spreadsheet that calculates the flip times so I can work around them this way. Thanks again, all good advice.
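     On the flip-time calculation, a minimal sketch of finding when a target crosses the meridian, assuming astropy is available; the coordinates, site and date below are placeholders:

     ```python
     import numpy as np
     import astropy.units as u
     from astropy.coordinates import SkyCoord, EarthLocation
     from astropy.time import Time

     target = SkyCoord(ra=10.684 * u.deg, dec=41.269 * u.deg)   # e.g. M31 (placeholder)
     site = EarthLocation(lat=52.0 * u.deg, lon=-1.0 * u.deg)   # placeholder site

     # Sample the night and find where the target's hour angle passes through zero.
     # (Assumes the target actually transits within the sampled window.)
     times = Time("2024-10-01 18:00:00") + np.linspace(0, 12, 721) * u.hour
     lst = times.sidereal_time("apparent", longitude=site.lon)
     hour_angle = (lst - target.ra).wrap_at(180 * u.deg)
     flip_time = times[np.argmin(np.abs(hour_angle.deg))]
     print("Meridian crossing (UTC):", flip_time.iso)
     ```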
  21. Interesting. So, let's say I shoot the same sub length from horizon to horizon throughout the night. The stacking algorithm weights the subs that are overhead significantly more than those at either horizon. Doesn't that mean I'm effectively wasting data at the horizons? If I could adjust my sub length or even gain/offset to get better data at the horizons, then wouldn't that be more efficient? Or, should I just shoot above a certain altitude as a general rule, maybe shooting two or more objects a night as they enter 'the zone', so they have less variance? In other words: what do you do? Actually in practice? I totally agree with the theory, I just want to know what I should be doing, if anything, to maximise my shooting time.
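     For a rough feel for how much signal the horizons actually cost, here is a sketch using the simple 1/sin(altitude) airmass approximation and an assumed extinction coefficient; real skies also add gradients and worse seeing, so treat the numbers as indicative only:

     ```python
     import numpy as np

     k = 0.25  # assumed broadband extinction, magnitudes per airmass (site-dependent)

     for alt_deg in (20, 30, 45, 60, 90):
         airmass = 1.0 / np.sin(np.radians(alt_deg))   # simple secant approximation
         flux = 10 ** (-0.4 * k * (airmass - 1.0))     # signal relative to the zenith
         print(f"alt {alt_deg:2d} deg: airmass {airmass:4.2f}, ~{flux:.0%} of zenith signal")
     ```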
  22. Hi all, Quick question that will probably spark off long answers! How do you handle shooting objects at different altitudes, including as they ascend and descend throughout the night? Do you just not bother below a certain alt? Or, do you shoot different sub lengths as it rises then falls? Or, do you shoot the same sub length regardless? Or, something else entirely? Thanks Brendan
  23. Fabulous. The focuser hasn't been a problem at all (SkyWatcher autofocuser with HiTec DC Focus controller) so I'd be reluctant to change that. I recently had pinched optics so moved the primary clips out a bit, and I'm wondering whether the mirror is now too loose. So, I'll check on that, plus be very rigorous about taking flats. Hopefully that will resolve the issue. Thank you again.
  24. Absolutely brilliant analysis, and far ahead of anything I could have done. Thank you so much! So it does look like there's a mismatch between the flats and the subs, and this is what someone else has told me since too. I use a Lacerta panel, but it was intended for a 150PDS, not a 130PDS, so I'm wondering whether I need to do something to try and make sure it's absolutely central (I thought it was, because I cut out some of the inner foam ring to position it, but maybe not). Also, I'm going to look into covering around the panel with a scarf or something, in case there's light leakage into the focuser tube (which is covered anyway, but I may as well try it). The scope is flocked, so I doubt there will be internal reflections bouncing around.

     Very interesting what you say about Siril. I've found that APP's light pollution removal tool is superior to ABE or DBE, but I'd rather not rely on APP as I'm transitioning to PI (I just have a rental license for APP). I've used Sirilic for stacking but found the Siril GUI a bit opaque. If its background removal is as good as you say, perhaps I should take another look. Pity PI's 'industry standard' tools aren't up to the job!

     Great to know I'm not absolutely way off, and the situation is salvageable in software. I've since processed them and managed to get something out of them, but I'd much rather fix this at source. The data looked good to me, and the guiding was excellent. The camera is still in the scope with the same orientation so I might even have a go at creating some new flats (both images used the same flats), which might be a way forward.

     Finally - and this is the million dollar question - is there anything else you can think of that could be causing this mismatch? No worries if you think I've covered everything. I really, really appreciate you taking the time out to help me here. I'm sure you know (as we all do) about the love/hate relationship we have with astrophotography. Ecstasy when it works, agony when it doesn't! Cheers, Brendan
  25. Hey, thanks! Really appreciate you taking the time to do this. I've kind of managed to fix it too in the meantime, but I'd very much like to fix it at source. I'm working on the theory that my flats aren't centrally aligned, whereas my lights are, so something isn't right about how my flats panel is located on my scope. Or something. Anyway, thanks again.