Everything posted by vlaiv

  1. No, RASA does not have diffraction limited optics and you really want a pixel size of between 4 and 5um for it to properly sample its images. In the above spot diagram, you can see that the sharpest part is at 550nm in the center - and it has a spot size of 3.6um. Add seeing and mount tracking to that and you'll easily have a FWHM of say 4" in RASA 8 images - and that corresponds to about 2.5"/px or a ~4.85um pixel size.
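As a rough check of those numbers (a minimal sketch in Python, assuming the RASA 8's 400mm focal length and the FWHM/1.6 sampling criterion that the figures above imply):

```python
# Rough pixel-size check for the numbers quoted above.
# Assumptions: RASA 8 focal length of 400 mm and a FWHM/1.6 sampling criterion.
FOCAL_LENGTH_MM = 400.0   # RASA 8 (assumed)
FWHM_ARCSEC = 4.0         # expected star FWHM after seeing and tracking

sampling_arcsec_per_px = FWHM_ARCSEC / 1.6                                # ~2.5 "/px
pixel_size_um = sampling_arcsec_per_px * FOCAL_LENGTH_MM * 1000 / 206265  # 206265 arcsec per radian
print(f'{sampling_arcsec_per_px:.2f} "/px -> {pixel_size_um:.2f} um pixels')
# prints: 2.50 "/px -> 4.85 um pixels
```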
  2. Not sure if this should make the top ten, but - here are my first 3D prints. Actually those are my 4th and 5th prints - I printed the 20mm calibration cube three times with the white sample PLA that I got with the machine - but I count those as "getting to know you" prints. Stock Ender 3 V2 with updated firmware. Creality CR-PLA (PLA+) in black color (obviously). I'm rather happy with these first prints - the only issue is the first layer and, I guess, too much bed heating for PLA - I had issues with elephant's foot and slight corner warping. Looking into getting a BL/CR Touch probe to simplify things.
  3. That is a very surprising result. I thought that TS 99% dielectric diagonals were good quality.
  4. I actually managed to read the first version and was about to ask - did you mean that as the animal or as one of the 7 deadly sins?
  5. Thing is - moving a manual mount with either the slow motion controls or simply pushing it by hand creates vibrations. You don't want constant manual tracking - you want to manually position the scope on the path of the object and then let the object enter the FOV while you are filming, with the scope itself stationary (no shakes that way).
  6. I've seen some images of the ISS taken with a large manual dob. I was seriously impressed - but I don't think it was manually tracked - rather push / capture the fly-through, push / capture the fly-through, and so on multiple times, to accumulate enough frames for stacking.
  7. I did not read the whole paper, just glanced over it, but here are a few interesting points. A value is replaced without any explanation as to why - except for the simple statement that the value is this, instead of the one given in the previous equation. Since some values are approximations - well, that allows us to choose a different but similar number and provides us with justification for a completely unrelated conclusion.
  8. I did some digging and this is what I've found:
     Original paper: https://www.torustech.com/wp-content/uploads/2020/08/Resolving_the_Vacuum_Catastrophe.pdf
     Published with Scientific Research Publishing https://en.wikipedia.org/wiki/Scientific_Research_Publishing which is itself:
     It is also worth reading the wiki article on predatory publishing: https://en.wikipedia.org/wiki/Predatory_publishing
  9. I really can't follow the math in there as it makes no sense to me. Take any combination of the written numbers and try to get 0.265. By the way - that was the old value; the new refined value is in fact 0.2589±0.0057. In fact - try to match any of the written numbers to one of these: https://en.wikipedia.org/wiki/Lambda-CDM_model#Parameters
  10. Oops, we did it again? Well, my contribution would be - largest sensor, cooled affordable model ...
  11. I think that the most convincing implementation is in AS!3. There is little if any difference in the level of detail of planets in OSC vs mono if both are properly sampled and conditions are good.
  12. Not sure how AHD works - but if we look at the naive linear interpolation method - then colors don't mix in the interpolation stage. Missing red pixels are calculated from the neighboring present red pixels - same for blue and same for green (the exception for green being that a different pattern of surrounding pixels is used to calculate the missing values).
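To illustrate what "colors don't mix" means here, a minimal sketch of naive per-channel linear interpolation on an assumed RGGB pattern (not how AHD or any particular capture software implements it):

```python
import numpy as np
from scipy.ndimage import convolve

def debayer_linear(raw):
    """Naive linear interpolation: each colour plane is filled only from its
    own samples - the channels never mix. RGGB layout assumed."""
    h, w = raw.shape
    r_mask = np.zeros((h, w), bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)

    # Bilinear weights; normalising by the convolved mask automatically gives
    # the different neighbour pattern that green needs.
    kernel = np.array([[1, 2, 1],
                       [2, 4, 2],
                       [1, 2, 1]], float)

    planes = []
    for mask in (r_mask, g_mask, b_mask):
        samples = np.where(mask, raw, 0.0)
        num = convolve(samples, kernel, mode="mirror")
        den = convolve(mask.astype(float), kernel, mode="mirror")
        planes.append(np.where(mask, raw, num / den))  # keep measured samples as-is
    return np.dstack(planes)  # H x W x 3 (R, G, B)
```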
  13. In any case - OSC can be made to operate at the "full" resolution implied by the pixel size - in the case of astronomical imaging, where images are stacked - if one uses Bayer drizzle instead of other debayering methods (provided, of course, that one is not over sampling and that Bayer drizzle is implemented properly).
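The Bayer drizzle idea, in a very simplified sketch (assuming integer-pixel alignment offsets and no rotation - real implementations handle sub-pixel shifts and drop footprints): each raw sample lands on its own colour plane at its registered position, so no interpolated values are ever created, and dithering between frames fills in the gaps.

```python
import numpy as np

def bayer_drizzle(frames, offsets):
    """Simplified Bayer drizzle: frames are raw RGGB mosaics, offsets are
    integer (dy, dx) registration shifts for each frame."""
    h, w = frames[0].shape
    acc = np.zeros((h, w, 3))   # accumulated signal per channel
    cov = np.zeros((h, w, 3))   # number of samples that landed on each pixel

    # Channel index of each position in the 2x2 Bayer tile (RGGB assumed)
    tile = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 2}

    for raw, (dy, dx) in zip(frames, offsets):
        for y in range(h):
            for x in range(w):
                ty, tx = y + dy, x + dx            # position after registration
                if 0 <= ty < h and 0 <= tx < w:
                    c = tile[(y % 2, x % 2)]
                    acc[ty, tx, c] += raw[y, x]
                    cov[ty, tx, c] += 1

    with np.errstate(invalid="ignore", divide="ignore"):
        return acc / cov   # NaN where no sample ever fell (needs enough dithering)
```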
  14. I think you are right. Sampling itself does not depend on orientation or on the resulting image, interpolated or not. Interpolation itself won't increase detail beyond the sampling rate, but in this particular case - no resampling happens, so the green pixels that were captured remain the same. As far as luminance goes - luminance is some linear combination of the raw components (the coefficients depend on the actual sensor response) and as such will have the high frequency components of green which will be missing in blue and red - so this arrangement acts as a sort of low pass filter (actually, aliasing artifacts will occur if there are high frequency components in red and blue, as these will be under sampled).
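For the luminance point, the combination would look something like this (the Rec.709 weights are used here purely as an example - a real sensor's response gives different coefficients):

```python
import numpy as np

def luminance(rgb):
    """rgb: H x W x 3 array from debayering. Coefficients are example
    Rec.709 weights, not any particular sensor's response."""
    weights = np.array([0.2126, 0.7152, 0.0722])   # R, G, B
    return rgb @ weights   # green dominates, so its high frequency detail carries through
```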
  15. @Adam J Very interesting point - if you rotate the grid by 45 degrees then green pixels are indeed spaced 1.414 pixel sizes apart, however, no debayering algorithm exploits this. In the above image - the "spatial position" of the debayered pixel is marked wrong. If interpolation debayering is used (like linear or VNG or others) then every pixel of the grid is the position of a debayered pixel - but that does not mean that the frequency of samples is altered. If super pixel / split debayering is used - then the pixel position can be anywhere you like in the 2x2 group - it does not matter, as the separation between resulting pixels is x2 the pixel size. No debayering algorithm rotates green and exploits the fact that the green pixels are spaced like that.
  16. Not quite so. Fractal sets are known for the fact that they have "volume" but don't have a (well defined) "surface" - or rather, the "surface" is infinite in size. The Koch snowflake is the classic example - it encloses a finite area but has an infinite perimeter.
  17. How did you arrive at that figure - x1.5?
  18. I'm not really sure a proton has a volume and surface area as such. When we talk about the radius of a proton, we talk about the charge radius, to quote from the wiki: At best we can say that a proton is a "fuzz ball" of volume without a clearly defined boundary, and in that case "surface" does not really have a sensible meaning. Maybe this video on proton structure can explain better why there are fundamental problems with assigning it a volume and a surface:
  19. No. I just remembered that OpenCV is used in robotics and security applications and that it can do object tracking (there are a bunch of videos on that subject on YT - I found a short one - just for demonstration). I don't think it would be hard for anyone who is good at programming to develop software to direct the mount depending on this input. There are a lot of examples in Python (or in other languages like C++). If I were interested in doing something like that - here is where I would start (a rough sketch of the first calculation follows below):
     - doing some basic calculations on aircraft angular speed depending on height, actual air speed, direction of travel and position relative to the observer (the worst case scenario is directly overhead at low altitude and high air speed).
     - I would assess the attainable and expected resolution (aircraft image size in terms of pixels and angular size, based on the size of the aircraft and its distance) and the expected exposure length needed to suppress motion blur.
     - I would then look at whether regular stepper motors can provide that sort of tracking speed and precision on something like the SkyTee-2. That mount has the needed payload capacity, is affordable and can easily be equipped with stepper motors (it already has slow motion controls with reduction).
     That is of course the DIY approach - and it does not address the original question of a ready made solution. My only concern is that the "cheap" part of the original question can only be addressed with DIY.
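A rough sketch of that first calculation (the speeds and altitudes are just assumed example figures): for an aircraft passing directly overhead, the peak angular rate is simply ground speed divided by altitude.

```python
import math

def max_angular_speed_deg_s(ground_speed_ms, altitude_m):
    """Worst case: aircraft passing directly overhead.
    Peak angular rate is ground_speed / altitude, in radians per second."""
    return math.degrees(ground_speed_ms / altitude_m)

# Example figures (assumptions, not measurements):
print(max_angular_speed_deg_s(70, 300))     # light aircraft, low pass  -> ~13.4 deg/s
print(max_angular_speed_deg_s(250, 10000))  # airliner at cruise        -> ~1.4 deg/s
```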
  20. I'm not so concerned about the mathematics side of things - but rather whether it has experimental backing. You can make all sorts of consistent mathematical models; the question is - do they describe reality, and how do we check that.
  21. That is an interesting piece of kit. I'm not sure that I would go that route though. Today it is much, much cheaper to do visual tracking. A simple guide scope with a guide camera (relatively wide field) and a bit of software based on OpenCV can be used for tracking the object. The problem is the precision / stability and tracking speed of the mount itself.
  22. I don't think you'll find anything cheap and reliable for this purpose. A tracking mount needs to know the trajectory in order to track objects. Either that, or it needs some sort of guiding / feedback on the object's position. The best place to start is to calculate the angular speed of airplanes depending on their distance / altitude / direction and speed. You also need to consider tracking "resolution" / "precision" (think of it as stepper motor steps or servo motor encoder resolution) and the max speed at that resolution. I think you will find that stepper motors are not very suitable for the purpose, as they have a low maximum speed, and that high resolution encoder servos are not cheap.
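A quick back-of-envelope on the stepper side (step angle, microstepping and gear reduction are all assumed example figures; the ~13 deg/s worst case is the overhead-pass estimate from the sketch above):

```python
# All figures below are assumptions for illustration only.
STEP_ANGLE_DEG = 1.8      # typical stepper full-step angle
MICROSTEPS = 16           # driver microstepping
GEAR_RATIO = 144          # worm / slow-motion reduction (assumed)

resolution_deg = STEP_ANGLE_DEG / MICROSTEPS / GEAR_RATIO
required_rate_deg_s = 13.0   # worst-case overhead pass

steps_per_second = required_rate_deg_s / resolution_deg
print(f'resolution: {resolution_deg * 3600:.1f}" per microstep')   # ~2.8"
print(f"required step rate: {steps_per_second:,.0f} steps/s")      # 16,640 steps/s
```

That sort of step rate at fine resolution is where steppers start losing torque, which is the limitation mentioned above.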
  23. No. It really makes no difference in short exposures used for solar / planetary work.
  24. Very nice capture! Many people think that Maks are slow for DSO imaging, and this should prove that it is a very usable instrument.