Everything posted by furrysocks2

  1. Yeah. I calculated the inter-microstep interval and nobbled the stepper driver demo code; haven't revisited it since (the arithmetic is sketched after this list). As it happened, my quick and dirty sidereal tracking rate is almost bang on for lunar tracking. I'll get the other stepper wired in and house it all with sensible hookup. Intending to go down the OnStep/ASCOM route. May need to revisit the stepper supply before I fry either of the drivers. I might look to buy stock items if I get any money at Christmas time.
  2. Another go at M33... what a pain. I've got a Datyson T7M/ST102/EQ3, no finder, RA stepper on an Arduino with no RTC or timer interrupt, just a sleep/step/sleep/step loop. Tracking kept dropping out, but I twiddled the current limiter and it seemed to settle down.

     As before: 2x2-binned, max-gain, ~1s exposures to focus (not very successfully), then snapshots from SharpCap with a script monitoring the output folder, plate-solving and displaying the FoV in Stellarium... twiddle the dec slow-mo, loosen the RA clutch to reposition, lock it up, re-snapshot, wait until Stellarium updates, repeat until I find M33. I didn't polar align beyond being able to see Polaris while sitting on the ground. That, and the poor timing on the Arduino, still means that darkness creeps up from the edge of the stack.

     Anyway... 30s at gain 66, 10 darks, live stack of 10. The only real difference this time is I took it in 16 bit. I'm pushing my shonky setup at 30s; stars are elongated, but that's OK because they're not in focus anyway. Dropped resolution to compensate. Levels twiddled in GIMP 2.9.

     Tonight, I give up. I don't care about Auriga. Too much like hard work.
  3. Half of the Double Cluster. Datyson T7M on ST102, 30x1.5s. Tracking this time.
  4. Just getting the target in view and letting it drift... stop the live stack, save it, then reposition the scope and start another one. I'd sometimes have to take a snapshot and re-solve to see in Stellarium where I'd ended up if I'd fiddled around on the laptop for too long, as I didn't level or align the mount other than turning it more or less the right way round. FOV is 30 arcminutes or something like that. I could see the central fuzz of the galaxy in the preview, so I put it off-centre in the frame so that it drifted through the middle to the opposite side. I think I got about 20 seconds each time, but watching the live stack, blackness creeps up from the borders and eventually SharpCap can no longer align new frames to the stack. It's a start, but you're right, there's room for improvement. I should put the effort in to get the RA motor working - it's in place but not hooked up to anything. Then I can use 16 bits, no binning, lower gain and longer subs. It's quite labour intensive otherwise!
  5. Thanks. That was the plan. Gain was set to 100, 8-bit and 2x2 binned, though. If I can get decent pictures with just an RA motor and the camera set to somewhat lower gain, max 10s exposures and 16-bit full size, I should be content for a season or two. Need to tweak my script for faster solving - hinting the FOV and recently solved coordinates (see the solver-hint sketch after this list) - and write an auto-snapshot script for SharpCap. I'm getting off-topic.
  6. Didn't realise how dark this looked; different monitors can make things look so different. So here it is again after a bit of Exposure/Black Level in GIMP. Probably looks rubbish on the other monitor.
  7. ST102 on an EQ3. No motors, no finder. Just a laptop, a cheap guide cam and a script I wrote to link SharpCap to Stellarium (a sketch of the idea appears after this list)... After revisiting the Double Cluster, which I missed by a frame's width last time, I zoomed out in Stellarium and looked around for DSOs. I got myself close, though it wasn't easy... click, wait, tweak, repeat... I ended up here-ish. Rotated the camera and live-stacked 2s exposures until drift meant SharpCap wouldn't find the alignment stars any more, tweaked the RA handle, then repeated. I took four of these stacks and fired up DSS for the first time. Total integration time: 70s. My first galaxy.
  8. /**
      * Adaptive Digital Pixel Binning
      * https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4541814/
      *
      * G      8-bit input image
      * F      8-bit output image
      * W      image width (eg 640)
      * H      image height (eg 480)
      * Rb     maximum binning ratio (eg 4)
      * lambda noise suppression sensitivity (eg 16.0, 1.0)
      * mu     pixel depth?? (eg 255)
      */
     #include <algorithm> // std::min, std::max, std::sort
     #include <cmath>     // std::abs
     #include <cstdint>   // uint8_t

     struct AbsCompare {
         bool operator()(int a, int b) const { return std::abs(a) < std::abs(b); } // compare by absolute value
     };

     void adpb(const uint8_t* G, uint8_t* F, int W, int H, int Rb, double lambda, int mu)
     {
         double* const HG = new double[W*H];                          // allocate 3x3 average buffer
         double max = 0;                                              // maximum 3x3 average value
         for (int y = 0; y < H; ++y)
             for (int x = 0; x < W; ++x) {                            // iterate all pixels
                 const int L = (x-1+W)%W, R = (x+1+W)%W,              // wrap at edges
                           U = (y-1+H)%H, D = (y+1+H)%H;
                 const double avg = (G[U*W+L] + G[U*W+x] + G[U*W+R] + // convolve 3x3 average kernel
                                     G[y*W+L] + G[y*W+x] + G[y*W+R] +
                                     G[D*W+L] + G[D*W+x] + G[D*W+R]) / 9.0;
                 if (avg > max) max = avg;                            // find max and store average
                 HG[y*W+x] = avg;
             }
         for (int y = 0; y < H; ++y)
             for (int x = 0; x < W; ++x) {                            // iterate all pixels
                 const double hg = HG[y*W+x], t = hg/max,             // average/fractional pixel values
                              r = 1 + (1-t)*(Rb-1);                   // optimal binning ratio
                 const uint8_t g = G[y*W+x];                          // centre pixel value
                 const int L = (x-1+W)%W, R = (x+1+W)%W,              // wrap at edges
                           U = (y-1+H)%H, D = (y+1+H)%H;
                 int d[9] = { g-G[U*W+L], g-G[U*W+x], g-G[U*W+R],     // differences to neighbours
                              g-G[y*W+L], g-G[y*W+x], g-G[y*W+R],
                              g-G[D*W+L], g-G[D*W+x], g-G[D*W+R] };
                 std::sort(d, d+9, AbsCompare());                     // sort differences, |a| < |b|
                 double bc = 0, bu = 0;                               // accumulators for convolution
                 for (int q = 1; q <= 9; ++q) {                       // iterate sorted differences
                     const uint8_t s = g - d[q-1];                    // neighbouring pixel value
                     const double contrib = r - (q-1);                // this pixel's contribution
                     bc += contrib > 1 ? s : contrib > 0 ? s*contrib : 0; // convolve context kernel
                     bu += (q == 1) ? s : ((r-1.0)/8.0)*s;            // convolve uniform kernel
                 }
                 const double gamma = std::abs(hg-g)/lambda,          // combination coefficient
                              b = (1.0-gamma)*bc + gamma*bu,          // denoised pixel value
                              w = (1.0/mu)*(b/(Rb-1.0) + g/2.0),      // blending coefficient
                              f = (1.0-w)*b + w*g;                    // blend for anti-saturation
                 F[y*W+x] = std::max(0.0, std::min(255.0, f));        // clamp to final pixel value
             }
         delete[] HG;                                                 // clean up
     }

     Edit: DON'T call with Rb==1, lambda==0.0 or mu==0.0! And don't feed it a frame with all pixel values of 0. (A minimal usage sketch appears after this list.)
  9. I came across this paper the other day: "Low-Light Image Enhancement Using Adaptive Digital Pixel Binning", 2015. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4541814/

     I've been playing with JamesF's oacapture, adding the facility to boost the brightness of the preview image to help with focusing or a bit of EAA-type stuff. I initially added pixel value multiplication (eg x2, x8, etc) and pixel binning (eg 2x2, etc). The former amplifies noise as well as signal, and the latter reduces spatial resolution. I reinvented digital pixel binning, which preserves spatial resolution but smooths details by low-pass filtering. The paper linked has this image to describe the two:

     Not one to hang about, I had a go at implementing their algorithm - it goes beyond the simple digital pixel binning (b) in the diagram above. It's a couple of hundred lines of C++, mostly comments. I've only tested it using the PS3 Eye webcam, which is currently sitting in the dark pointing at a hank of paracord... This was with the camera set to 3fps, low gain and low-ish exposure, just enough so that I could test the 4x/8x routines within the available brightness range. Here's what it looks like multiplying all pixel values by 8:

     The 2x2 digital pixel binning does a job of removing some of the noise but loses a bit of sharpness. The ADPB algorithm does a better job still:

     The algorithm consists of four steps:
     • calculate the optimum amplification ratio for each pixel based on a 3x3 average kernel (brightness adaptive)
     • calculate a binning pattern based on neighbouring pixel values (context adaptive)
     • blend in a uniform binning pattern to reduce noise (noise adaptive)
     • blend in the original to remove saturation

     It does this with a single pass over the image, inspecting neighbouring pixels and doing its thing. ...no: convolution of a 3x3 averaging kernel, find the max pixel, then a single pass for everything else... that's three.

     Another comparison, with an improved image to begin with:

     And a closeup of the earlier comparison:

     I don't know what's in other software, or if there are simpler algorithms you could run to despeckle or whatever - I've not done any image processing really, other than AS!2-ing the moon. Just learned about darks in SharpCap, but this looked pretty neat and wasn't that hard to implement once I got my head around their mathsy notation. Is this interesting to anyone?
  10. I only really intend to use it for lunar, and for learning the basics of long-exposure processing on bright DSOs. I don't think I'll be guiding with it any time soon. It's probably comparable to the ASI120MM in every way, but I don't have a genuine one to compare it against. Nor have I an SPC900NC to compare to. I would like to put it in a finder for a live finder view, plate-solved with coordinates shown in Stellarium or similar, so I hope it's OK for that. A couple of my first wide-field pics are here: https://stargazerslounge.com/topic/301405-my-first-dso/ If you're in no hurry, I'll let you know more as and when. Otherwise, you might just have to take a punt on "like an ASI120MM".
  11. Downloaded the native ASI driver for Windows; it runs fine in SharpCap as "ZWO ASI120MM". Laid the camera down on the carpet with the wide-angle lens it came with. The only light in the room was the laptop, facing away. Gain at 50 and let it auto-expose... it settled at a 30s exposure. I never took a screenshot at that, but immediately captured a 4-frame dark and applied it: Then a live stack of 4: Easily the best camera I've ever had. I imagine this is primitive stuff to a lot of folk, but "firsts" are always fun!
  12. Reports in syslog as: Downloaded libuvc, libasicamera and oacapture from http://www.openastroproject.org/downloads/. Camera works! But oacapture doesn't fit on my screen! Ran a 10s exposure with max gain, just a snippet scaled x2:
  13. Came across the T7 on AliExpress: https://www.aliexpress.com/item/T7-Astro-Camera-Astronomical-Astronomy-Planetary-High-Speed-Electronic-Eyepice-Telescope-Digital-Lens-for-Guiding-Astrophotograp/32786715355.html There's an active thread in French just now: http://www.webastro.net/forum/showthread.php?t=149406 I'd like something for the moon, so uncompressed frame rate is important. It quotes exposure times of up to 1 minute, so that's something else to play with. And it could guide when I get something else. It's a clone with no warranty, I know, but I'm wondering if anyone has experience with one?
  14. I took this tonight - it took me a few goes at processing to get this one. Undriven 8.5" f/7.6 Newt, PS3 Eye camera (640x480 @ 75fps) - 13,000 frames, PIPP ROI ~400x400 down to 10,000 frames, AS!2 best 10% with 1.5x drizzle, then a first go at using ImPPG to sharpen.
  15. That's awesome! First ISS image I've seen, so much detail.
  16. This was one more graph I had in mind for Mars... it shows altitude by time (6pm to 6am, taking account of sunset/sunrise). No great surprise that the maximum altitude at opposition occurs at midnight, though I did have to think that one through for a second or two (see the note after this list).
  17. I got my scope out of storage in November with the intention of concentrating on planetary observing. :/
  18. Cheers - hope they're correct. Anything they don't tell you?
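
A note on the inter-microstep interval mentioned in post 1. This is a minimal sketch of the arithmetic only, assuming an EQ3 RA worm wheel of 130 teeth, a 200-step motor, 16x microstepping and direct drive - all of these figures are assumptions, so substitute your own mount and motor values.

    // Back-of-envelope inter-microstep interval for sidereal tracking.
    #include <cstdio>

    int main() {
        const double sidereal_day = 86164.0905; // seconds per sidereal rotation of the sky
        const double worm_teeth   = 130.0;      // EQ3 RA worm wheel (assumed)
        const double motor_steps  = 200.0;      // full steps per motor revolution (assumed)
        const double microsteps   = 16.0;       // driver microstepping factor (assumed)
        const double gear_ratio   = 1.0;        // motor:worm reduction (assumed direct drive)

        const double steps_per_day = worm_teeth * gear_ratio * motor_steps * microsteps;
        const double interval_us   = 1e6 * sidereal_day / steps_per_day;
        std::printf("inter-microstep interval: %.1f us\n", interval_us);
        // A blocking sleep/step loop would then pause this long between step pulses;
        // a timer interrupt avoids the drift such a loop accumulates.
    }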
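On the solver hinting in post 5: a hypothetical sketch, assuming the solver is astrometry.net's solve-field (the posts don't actually name it) and that it's on the PATH. The filename, coordinates and field width here are illustrative only.

    // Hint solve-field with the last solved position and an approximate field width.
    #include <cstdlib>
    #include <string>

    int main() {
        const double last_ra = 23.46, last_dec = 30.66; // degrees, from the previous solve (illustrative)
        const double fov_deg = 0.5;                     // rough field width in degrees (illustrative)

        const std::string cmd =
            "solve-field snapshot.fits --overwrite --no-plots"
            " --ra "  + std::to_string(last_ra) +
            " --dec " + std::to_string(last_dec) +
            " --radius 5"                               // only search within 5 deg of the hint
            " --scale-units degwidth"
            " --scale-low "  + std::to_string(fov_deg * 0.8) +
            " --scale-high " + std::to_string(fov_deg * 1.2);
        return std::system(cmd.c_str());                // hinted solves are far faster than blind ones
    }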
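The SharpCap-to-Stellarium link from posts 2, 4 and 7: the original script isn't shown, but here is a hypothetical sketch of its final step - pushing a solved position into Stellarium - assuming the Remote Control plugin is enabled on its default port 8090 and that libcurl is available. The coordinates are illustrative.

    // Point Stellarium's view at a solved position via the Remote Control plugin.
    // Build with: g++ point_stellarium.cpp -lcurl
    #include <cmath>
    #include <cstdio>
    #include <curl/curl.h>

    int main() {
        const double deg = 3.14159265358979323846 / 180.0;
        const double ra = 23.46 * deg, dec = 30.66 * deg; // solved J2000 position, radians (illustrative)
        // The /api/main/view endpoint takes the view direction as a J2000 unit vector.
        const double x = std::cos(dec) * std::cos(ra),
                     y = std::cos(dec) * std::sin(ra),
                     z = std::sin(dec);
        char body[96];
        std::snprintf(body, sizeof body, "j2000=[%f,%f,%f]", x, y, z);

        curl_global_init(CURL_GLOBAL_DEFAULT);
        CURL* curl = curl_easy_init();
        if (!curl) return 1;
        curl_easy_setopt(curl, CURLOPT_URL, "http://localhost:8090/api/main/view");
        curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body); // POST sets the new view direction
        const CURLcode rc = curl_easy_perform(curl);
        curl_easy_cleanup(curl);
        curl_global_cleanup();
        return rc == CURLE_OK ? 0 : 1;
    }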
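For completeness, a minimal usage sketch for the adpb() routine in post 8, using the example parameters from its header comment; the frame contents here are placeholders.

    #include <cstdint>
    #include <vector>

    void adpb(const uint8_t*, uint8_t*, int, int, int, double, int); // defined in post 8

    int main() {
        const int W = 640, H = 480;
        std::vector<uint8_t> in(W * H, 0), out(W * H);
        in[0] = 1; // placeholder data - per the warning above, never pass an all-zero frame
        adpb(in.data(), out.data(), W, H, /*Rb=*/4, /*lambda=*/16.0, /*mu=*/255);
    }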
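And the sanity check behind post 16, assuming the usual transit-altitude relation: a body of declination δ culminates at altitude 90° − |φ − δ| for an observer at latitude φ, and at opposition that culmination falls at local midnight because the planet sits directly opposite the Sun. For example, from 56° N with Mars at δ = −5° (an illustrative value), the midnight maximum would be about 90° − 61° = 29°.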