Everything posted by ONIKKINEN

  1. This version seems to have done the trick, with more familiar numbers for e-/ADU and full well. I have not seen a full well test go over 16.7k on mine yet, and gain 100 usually reports closer to 0.25, but this is pretty close to what I expected to see. I ran it cooler this time, but I don't think that would affect the measurements much when dark current is still fractions of an electron per minute. Just because I was curious, I ran the test with my 678MC too and it worked well, with nothing special to note. As general feedback, I do like that you have allowed testing only specific gain values, like what I did here to get a more accurate readout around the point where the camera switches into HCG mode (I believe gain 182). Interesting to see that the random telegraph noise is almost absent with this one.
  2. My secondary is undersized, so it vignettes maybe 20% at 1x, but the Paracorr actually improves this with its 1.15x Barlow element, so there is less vignetting than native (reducers vignette more, extenders less). Just some food for thought. The Paracorr is more expensive though, so there is always a compromise around the corner.
  3. The DEC drift here is due to just that, but it's not a real issue as it seems very slow. Realistically, even with a 1 arcsecond polar alignment you will still have some drift, because there is probably some cone error in the setup. Long story short: a polar alignment within a few arcminutes is as good as it needs to be when guiding. No harm in going for a more accurate one, but likely no gain either.
  4. This video shows your RA periodic error as left-right movement and a slight DEC drift as vertical movement. This is definitely not fixable without guiding, balanced or not. I would try to find out why guiding has not solved the issue. Are the basics in check with guiding? Focal length and pixel size correct, calibration done at the right place in the sky, and so on? Is backlash adjusted to a tolerable level, and is the mount in good shape otherwise? Simple RA guiding should not be too difficult if the gear is set up properly, so there might be something you have overlooked.
  5. This is a decent way to treat slower Newtonians too. Mine is f/4.4 natively and a bit over f/5 with a Paracorr, and it's not easy to get good stars corner to corner. However, if I crop to a Micro 4/3 sensor size (like the 294 or 1600MM), then in 95% of cases the images are great corner to corner. Newtonian + APS-C is quite demanding, I would say. That is, if one is a pixel peeper; if not, then it's workable for sure.
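(Quick arithmetic on that f-ratio: the Paracorr's 1.15x factor takes the native f/4.4 to roughly 4.4 × 1.15 ≈ f/5.1, hence "a bit over f/5".)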
  6. The flat panel is a 25€ plastic toy drawing/tracing panel lit by LED strips behind a few diffusion layers. Far from ideal, but all channels calibrate well despite the dodgy levels, so I have not thought about it further. I will run another test later today, or tomorrow if I don't have the time. I also got an idea: I will run the test with another OSC camera (ASI 678MC) to see if there are similar issues, so we can rule out some funny business with the drivers.
  7. Regarding those flats, something came to mind just now. I have had doubts about whether the old version of NINA and the Touptek driver in it engage the low noise mode, since there is no toggle for it. So if there is something odd about those flats, that would probably be it, and they might not be the most useful test subject.
  8. A word of warning about painted mirrors: the coatings on my secondary mirror (OOUK Hilux coatings, so not the exact same as yours, but anyway) have started to come off around the edges after I blackened them much like you did here. I can't be sure whether it was the paint that did it, or the frequent dewing, freezing and thawing that the mirror undergoes around the edges, but I only noticed this after having painted the mirror.
  9. Here are 2 flats from my most recent session. Dark current at +5°C shouldn't cause such a large jump in noise levels with this camera, since it is so low to begin with, but it is one thing that differs between our runs, so I will run the test at a cooler temperature next to make sure it is not causing issues. As for the rest of the issues, it would be enough for the gain to be calculated incorrectly to also get the wrong read noise and full well values, since those are derived from it (see the note below). I already knew that the camera has a little less than 1 e- read noise and around 16.5k e- full well capacity at gain 100 with HCG and low noise mode on, so very similar to your mono measurements.
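To make the "derived values" point concrete, here is a sketch with example numbers, not the tool's actual code. The 0.25 e-/ADU is the gain-100 figure mentioned earlier; the ADU read noise is a hypothetical value chosen to match the expected results:

```python
# Read noise and full well both scale with the measured gain, so a gain
# estimate that is off drags both of them off by the same factor.
gain_e_per_adu = 0.25        # gain at gain setting 100, e-/ADU (from the post above)
read_noise_adu = 3.6         # hypothetical read noise as measured in ADU
adc_range_adu  = 2 ** 16     # 16-bit output range

read_noise_e = read_noise_adu * gain_e_per_adu   # ~0.9 e-, "a little under 1 e-"
full_well_e  = adc_range_adu * gain_e_per_adu    # ~16.4k e-, close to the 16.5k figure
print(read_noise_e, full_well_e)
```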
  10. Well, this is the part that leaves me stumped, because in what way is this implied? How is it that a light moving in a way you cannot explain must mean it is being controlled? There is some bias here, in that I think you want to believe it is being controlled, when in reality there is not an atom of evidence to support this.
  11. Balancing slightly east-heavy could help with this, but most likely it was a minor part of the issue, unless you were at the perfect point of balance where your RA axis wobbles between the extremes of backlash (very unlikely). You will need to guide to get rid of the RA periodic error and the issues it causes.
  12. While you did not say this, be honest, the intent was there and easily readable between the lines. All this talk of "input" and "piloting", "AI controlled vehicle" and so on (in your further comments), but I thought these were supposed to be unidentified aerial phenomena, not identified as an unknown object that is being piloted, or as an object that is even capable of being piloted. Or even identified as an object at all! The only data point in the whole description was a couple of lights. There is no reason to assume anything about some strange light if we agree that the thing was unidentified. So your intent of assuming this was an alien thing was very clear from the comment.
  13. Ran another test: it worked well this time with regards to the exposures and linearity. I ran the dark current test and noticed something that may not be intended. For the first 10 test exposures it came up with an error along the lines of "could not get fullwell from driver". I did not think to take a screenshot, but it was something like that. The next 10 exposures were without this error, so was the test intended to run twice, or did it run twice because of the error in getting the full well from the driver for the first 10 exposures? Another thing to note is that the e-/ADU, read noise and full well values reported by the test are all around 30% higher than what SharpCap and the RisingCam site itself report (actually they removed that info at some point, so the site no longer lists anything), so maybe something still needs a tweak. Or it could be that the other tests are wrong; I am not educated enough on that to say yea or nay, but your mono version reports the numbers I would have expected, so maybe this is still some OSC camera issue. A sketch of how I understand the gain measurement works is below.
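This is my assumption of roughly what the tool does internally (the standard flat-pair photon transfer estimate), not its actual code. The function and array names here are mine:

```python
import numpy as np

# flat1, flat2: two equal flat-field exposures; bias: an offset frame; all raw ADU arrays.
def estimate_gain(flat1, flat2, bias):
    f1 = flat1.astype(np.float64) - bias.mean()
    f2 = flat2.astype(np.float64) - bias.mean()
    signal = f1.mean() + f2.mean()        # sum of the bias-subtracted means (ADU)
    variance = np.var(f1 - f2)            # differencing cancels fixed pattern noise
    return signal / variance              # gain in e-/ADU, shot-noise limited

# If the flats are clipped, or one Bayer channel sits in a non-linear range,
# the measured variance is skewed and the gain (plus everything derived from
# it) ends up off by a similar factor.
```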
  14. Why not use a USB stick? Why do you need the subs on your main PC the instant they are taken? It seems like an extra complication with no real benefit to me; personally, I would try to simplify the process a bit. The connection between the scope-side PC and the control device only needs to be good enough for an image to be displayed, which doesn't require high speed. I would suggest simply collecting the subs on a USB stick and transferring them to your processing PC after the imaging run has completed.
  15. Try Telescopius for suggestions of what to image at the moment: https://telescopius.com/ You can also browse Stellarium and try to find something to image: https://stellarium-web.org/
  16. There is also the new Quattro 150 from Skywatcher, which comes with a reducer that shortens the focal length to 518mm. It still doesn't compete with the widest fracs, but it's getting there in terms of what sorts of targets fit in the FOV. From what I have read, and seen in YouTube videos of this scope, it is similar to the rest of the cheap Skywatchers: OK but not fantastic for imaging, and it likely requires some work to get going.
  17. Certain types of imaging can be cheap, or at the very least not too expensive. Planetary and lunar, for example: tracking is optional, and if you have it then it doesn't have to be amazing like it does for DSO imaging - just good enough to keep the target on the chip. Planetary cameras are very affordable too, as opposed to larger sensors. Widefield with something like the Samyang 135 can be "cheap" too, but the lens itself costs a fair bit of money. Cheap trackers are probably OK with a lens that wide, but even with just the lens, a cheap tracker and a second-hand DSLR we are easily crossing the 1000 mark in cost. So is that cheap? Astrophotography has twisted my view of money, so I would still call it cheap; others may disagree. A type of imaging that I don't think can really be done cheaply is distant DSO imaging - not the Orion Nebula and Cygnus Loop type targets, but the galaxies measured in arcminutes instead of degrees, the galaxy clusters with dozens of small galaxies and so on. I really don't see that type of imaging being done affordably at the moment, because you need a large aperture, which means a heavy scope that needs a sturdy mount, and then ideally a sensitive modern cooled camera to make the best of the handful of photons that actually come from the target. So this type of imaging, even on a budget, will easily run to several thousand £/€/$.
  18. NINA is an all-round great control software: free and open source, and it works with just about anything. I think the camera may need to be connected via USB though; I am not at all sure about using it over Wi-Fi. If you manage to get things going with a computer, then reaching the best possible focus becomes so much easier, so I do encourage at least attempting that. * NINA link: https://nighttime-imaging.eu/
  19. I see you added the no-computer part, so disregard the HFR thing. In that case probably the best way would be to use a bright star near the intended target, with or without a Bahtinov mask - up to you. I tried both when imaging with a 550D, and in the end I preferred not to use the mask and instead look at the star itself at high zoom.
  20. I see you are using a DSLR, so if you are worried about the shutter count increasing then do just that with live view and a bright star. If not then ideally you would focus on the target itself with measured HFR readouts.
  21. If you are imaging with the help of a computer of some kind (so anything but a DSLR screen), then I would suggest focusing without a Bahtinov mask or the Moon. Both can be less than ideal: with the Moon you have to slew the scope to the target afterwards, which can cause slippage with modest focusers, and with the Bahtinov mask it is easy to misinterpret just how central the middle line is. Instead I would recommend focusing by simply looking at the reported HFR/FWHM values. The workflow is very simple: loop exposures and adjust focus in either direction, and once the values no longer get smaller you are at ideal focus (see the sketch below). Repeat once you see HFR increase during the night. Of course this relies on a computer to read the values.
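A minimal sketch of that loop, just to show the logic. The three callables are hypothetical hooks into whatever capture/focuser software you use; they are not real NINA (or other) API calls:

```python
def focus_by_hfr(capture_exposure, measure_hfr, move_focuser,
                 step=20, max_iterations=30):
    best_hfr = measure_hfr(capture_exposure())
    for _ in range(max_iterations):
        move_focuser(step)                    # nudge focus in the current direction
        hfr = measure_hfr(capture_exposure())
        if hfr >= best_hfr:                   # stars got fatter: we passed best focus
            move_focuser(-step)               # step back to the best position so far
            step = -step // 2                 # reverse direction and refine the step
            if step == 0:
                break                         # step size exhausted, close enough
        else:
            best_hfr = hfr                    # still improving, keep going
    return best_hfr
```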
  22. @vlaiv does have a point. Either way could be true, and we have no way to know at the moment with our limited sample size of planets with life on them (1). I am sure most know of the Drake equation, which attempts to answer the question of how common life could be: https://en.wikipedia.org/wiki/Drake_equation (written out below). We can answer the first 2 terms with some certainty and get a value, which would be that most stars, if not all of them, have planets around them. However, maybe only 10% of stars are of a type with a stable enough lifespan to allow life to grow and prosper (but this too is unknown and pure hypothesis). The 3rd term is still an open question, because our current methods of finding exoplanets are quite biased towards finding large planets around small stars, and not well suited to finding Earth-like planets around larger stars. However, we still find planets that could be rocky and at a suitable distance for life, so that is a partial answer. Everything after that is, at the moment, not answerable with any kind of certainty, since we have just the one planet to sample from. It could be the rarest thing in the universe, or it could be common given a billion years of stability for the system; there is no way to know. Personally, I think the numbers game should still play out in life's favour here. We could easily be the only planet with intelligent life in our galaxy, but there could just as easily be some single-celled organisms somewhere else. Expand the question to the entire universe and I think it's likely there are intelligent lifeforms out there somewhere. However, a distant galaxy might as well be another universe, because there is virtually zero chance that we will ever have a meaningful connection with them due to the distances involved.
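For reference, the equation from the link above, with its usual terms:

```latex
% The Drake equation as written in the linked article:
% N = number of civilisations in our galaxy whose signals we might detect
N = R_{*} \cdot f_{p} \cdot n_{e} \cdot f_{l} \cdot f_{i} \cdot f_{c} \cdot L
% R_* : average rate of star formation        f_p : fraction of stars with planets
% n_e : habitable planets per such system     f_l : fraction of those developing life
% f_i : fraction developing intelligence      f_c : fraction producing detectable signals
% L   : length of time such signals are released
```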
  23. Actually, disregard the read noise thing; I had low noise mode off when loading the ASCOM driver. I also did a little MacGyvering of my flat panel, adding painter's tape and some printer paper to make the panel a bit more neutral. The exposures are still clipping at higher gains, so no cigar yet; it needs some tweaking for OSC cameras for sure. Also, on another test run I ran into an issue where the tool was unable to find a suitable exposure, even though it was sitting at 70%, which is exactly what it requests.
  24. I had some issues running the analysis. I think it has something to do with the automatically chosen exposures being clipped/too bright for the blue pixels in my OSC camera (the flat panel is quite blue, which is not a problem normally). The analysis itself was very straightforward and completed just fine, but the results are not in agreement with SharpCap or the manufacturer's graphs. Here is an example of one of the exposures I managed to grab a snapshot of: many of them were clipped at the right-most peak, which I presume is the blue channel. And the results, which I think may not be valid if the test exposures were clipped: linearity clearly hits a wall (presumably because of the clipped blues), and read noise is also much higher than what SharpCap or the manufacturer report. I expected results very similar to yours, but I don't think this test completed properly. I think the target exposure level could use some adjustment for cases where there are several peaks, like with OSC cameras and non-neutral flat panels (most of the cheap ones); relaxing it to 60% or 50% might yield much better results, or the user could be allowed to select the median target. A sketch of the kind of per-channel check I mean is below.
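Something like this is what I have in mind, as a rough sketch (assuming an RGGB Bayer layout and 16-bit data; adjust both for the camera in question). The point is that a single whole-frame median near 70% can hide one colour channel being clipped:

```python
import numpy as np

def bayer_channel_stats(frame, saturation=65535):
    channels = {
        "R":  frame[0::2, 0::2],
        "G1": frame[0::2, 1::2],
        "G2": frame[1::2, 0::2],
        "B":  frame[1::2, 1::2],
    }
    for name, ch in channels.items():
        clipped = np.mean(ch >= saturation) * 100      # % of saturated pixels
        print(f"{name}: median {np.median(ch):.0f} ADU, {clipped:.2f}% clipped")

# Example with synthetic data standing in for a flat frame:
bayer_channel_stats(np.random.randint(0, 65536, (100, 100), dtype=np.uint16))
```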
  25. It's just luminance, which could be a simple UV/IR cut filter or a similar filter where all of R, G and B is passed. You add RGB data to the luminance (or vice versa) to create an LRGB image. The luminance image contains all the detail and contributes the most to the overall signal-to-noise ratio of the combined image. A common way to split the time would be a 3-1-1-1 ratio, where luminance equals R, G and B combined, so as an example 2h of L and 40min each of R, G and B. The resulting image would be better than a 4-hour pure RGB image with a colour camera; by how much will depend on many things, but it should still be better.
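One simple way to do the combination step, as a sketch only: take the brightness from L and keep the colour ratios from the RGB data. Real processing tools do this in more refined colour spaces, so treat this as an illustration of the idea rather than a recipe:

```python
import numpy as np

def combine_lrgb(L, R, G, B, eps=1e-6):
    rgb_lum = (R + G + B) / 3.0        # crude luminance of the RGB-only data
    scale = L / (rgb_lum + eps)        # per-pixel brightness correction taken from L
    return np.stack([R * scale, G * scale, B * scale], axis=-1)
```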