

Posts posted by vlaiv

  1. I just checked my 8" Skywatcher dob - and it is indeed M54.

    However, it seems it is not a universal thing:

    https://www.cloudynights.com/topic/782264-sky-watcher-2-draw-tubes-thread-size/

    mentions M57/M56, and from the images you posted - I'm inclined to declare it M60 on your scope.

    Do however be careful when measuring threads - you did ok in the above image. You need to measure the male thread, and be careful not to let the caliper sit in a thread groove - rather, measure across the thread ridges (from ridge to ridge). Due to tolerances, you will measure slightly below the nominal major diameter.

    If you measured it like that - then from your image it simply looks like M60 (and do check the spacing between threads as well - you can do that with calipers, but it's better if you have a thread gauge for it).

     

  2. Common wisdom says that you should go for 2-3mm exit pupil and lower magnifications.

    You have a fairly slow scope that is easy on eyepieces, so many will perform nicely.

    I'm inclined to point you to the Explore Scientific 62 degree line - the 26mm. It is in your price bracket and, as far as I'm aware (I haven't owned one), it is a very comfortable eyepiece with 20mm of eye relief.

    On the other side of this range would be an eyepiece of around 18mm - so this one might be an option:

    https://www.firstlightoptics.com/stellalyra-eyepieces/stellalyra-18mm-ultra-flat-field-125-eyepiece.html
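
    If you want to check the numbers yourself, here is a minimal sketch of the exit pupil / magnification arithmetic. The scope parameters below are assumed placeholders, not your actual instrument - plug in your own aperture and focal length:

    ```python
    # Minimal sketch of the exit pupil / magnification arithmetic.
    # The scope below is an assumed example (f/7 refractor) - substitute
    # your own aperture and focal length.

    def magnification(scope_fl_mm: float, eyepiece_fl_mm: float) -> float:
        """Magnification = telescope focal length / eyepiece focal length."""
        return scope_fl_mm / eyepiece_fl_mm

    def exit_pupil_mm(eyepiece_fl_mm: float, focal_ratio: float) -> float:
        """Exit pupil = eyepiece focal length / telescope focal ratio."""
        return eyepiece_fl_mm / focal_ratio

    scope_aperture_mm = 102.0   # assumed example
    scope_fl_mm = 714.0         # assumed example (f/7)
    f_ratio = scope_fl_mm / scope_aperture_mm

    for ep_mm in (26.0, 18.0):
        print(f"{ep_mm:.0f}mm eyepiece: x{magnification(scope_fl_mm, ep_mm):.0f}, "
              f"exit pupil {exit_pupil_mm(ep_mm, f_ratio):.1f}mm")
    ```

    With those assumed numbers, the 26mm gives a ~3.7mm exit pupil and the 18mm a ~2.6mm exit pupil - right in the 2-3mm band mentioned above.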

  3. If I remember correctly - clutches consist of two parts:

    a bolt that is used to apply the tension, and a little insert that actually presses on the shaft.

    At least that is the case with the HEQ5 mount.

    Maybe you should take things apart and see if similar inserts are stuck due to corrosion or something?

    [image: clutch bolt and pressure insert after disassembly]

    (the above is a screenshot from a YouTube video showing both parts after disassembly https://www.youtube.com/watch?app=desktop&v=rsr7m1ejjVU - you can also see it at 1:11:32 being put back in place - the little disk, with the clutch screw inserted after it)

     

  4. An important thing to note is that this model does not have equations like we are used to.

    It can only be solved by numerical methods / via computer, because it is a set of differential equations without a nice analytical solution.

    Observational data is fed into a computer program and a best fit is produced. This best fit yields certain functions and numerical values for the constants, which we take as the "solution" to the model.

     

  5. 1 minute ago, gajjer said:

    To clarify things for my simple mind: are you saying that you take an image binned 1x1 and then, in processing it, you choose to bin 2x2 (or whatever)?

    I was thinking that you were binning at the time of taking.

    cheers

    gaj

    I personally don't mind either way - in fact I prefer binning at the data reduction stage, because there is a technique to do it that is slightly better than using "firmware" binning.

    However, when I say binning at the data reduction stage - I mean it is performed after calibration and before registration and stacking. It can be thought of as part of the calibration stage. A minimal sketch of that step is below.
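
    ```python
    # Straightforward 2x2 software binning of a calibrated sub (numpy array),
    # done after calibration and before registration/stacking. This is the
    # basic version - not the "slightly better" variant mentioned above.
    import numpy as np

    def software_bin(sub: np.ndarray, factor: int = 2, average: bool = True) -> np.ndarray:
        """Bin a 2D image by summing (or averaging) factor x factor pixel groups."""
        h, w = sub.shape
        h -= h % factor                      # trim edges not divisible by factor
        w -= w % factor
        blocks = sub[:h, :w].reshape(h // factor, factor, w // factor, factor)
        binned = blocks.sum(axis=(1, 3))
        return binned / factor**2 if average else binned

    calibrated_sub = np.random.rand(1024, 1280).astype(np.float32)  # stand-in data
    print(software_bin(calibrated_sub).shape)                       # (512, 640)
    ```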

  6. 6 minutes ago, Adam J said:

    I would apply noise reduction as a first step following stacking then resample.

    I would advise against this, as stacking would not make sense after you alter the noise distribution.

    In any case - why don't you simply try it out on the data you already have? Make a comparison between the two approaches.

    Maybe the best way to do it would be to create a "split screen" type of image. Register both stacks against the same sub so they are "compatible", prepare the data one way and the other, and compose the final image prior to processing out of two halves - left and right copied from the first and second method (a minimal sketch of the composition step is below).

    That way the final processing will treat both datasets the same, as you'll be doing it on a single image.
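
    ```python
    # Composing the "split screen" comparison image: left half from one
    # dataset, right half from the other. Assumes both stacks are numpy
    # arrays already registered against the same reference sub.
    import numpy as np

    def split_screen(stack_a: np.ndarray, stack_b: np.ndarray) -> np.ndarray:
        """Left half copied from stack_a, right half from stack_b."""
        assert stack_a.shape == stack_b.shape, "register both stacks first"
        half = stack_a.shape[1] // 2
        out = stack_a.copy()
        out[:, half:] = stack_b[:, half:]
        return out
    ```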

    Alternatively - if you can't make it work that way - just do a regular comparison: do the full process one way and then the other.

    I'd be very happy to see the results.

  7. 4 minutes ago, Adam J said:

    yes, it's detrimental when you are undersampled and you lose detail in bright areas to gain SNR in faint areas of the image. Something that occurs in wide field imaging quite a lot. The point being that in wide field you are almost never limited by seeing.

    My example is that when I processed my M45 wide field image at 180mm focal length, the surrounding dust looked better when I binned the image, as you might expect. But in binning the whole image you lose detail in the core of M45.

    Yes, it would be possible to bin an image, resample back up to the original level and then mask the brighter areas of the unbinned image back in, but the AI noise reduction seems to achieve this with better granularity and minimal effort.

    We seem to have different concepts of what binning is for.

    I see it as a data gathering step - integral to the data reduction stage - rather than a final processing tool.

    In my view - if you choose to bin your data - there is no sense in upsampling it. You don't upsample your data from regular pixels either, right?

    You choose to bin your data because you'll be happy with the final sampling rate that binned pixels give you - as if you were simply using larger pixels in the first place.

    If you leave your image binned - it won't be any different from shooting "natively" at that sampling rate. It can lose detail versus a properly sampled image - but that is not due to binning; the same would happen if you compared two images taken with regular pixels - one with bigger pixels that undersample and one with smaller pixels that sample properly. Undersampled data will not show the same level of detail - simply because it's undersampled (not because it was binned - in this case it was not).

     

  8. 1 hour ago, Adam J said:

    Am starting to think that the only things that matter are QE, read noise per unit area and sensor size.

    And this is why,

    I don't think software binning for increased SNR is the way to go anymore.

    With some things I agree 100% and with others 0% :D. I'll explain.

    1. Yes, QE does matter, but only up to a point.

    Nowadays, most cameras have very similar QE. There is really not much difference between 81% and 83%. Transparency on the night of imaging and the position of the target can have a greater impact on speed than this. So does the choice of stacking algorithm, if one uses subs of different quality. My reasoning is to go with higher QE only if it fits the other criteria below.

  2. Read noise is, in my view, completely inconsequential for long exposure stacked imaging, where we control sub duration at will.

    Since we can swamp the read noise with a suitable exposure length - and CMOS cameras have much lower read noise than CCDs used to have - I don't see it as a very important factor. I would not mind using a 3e read noise camera over a 1.4e read noise camera if it fits the other criteria.

  3. Sensor size is very important for speed - because it lets us use a larger aperture while still having enough FOV to capture our target. I guess this is self-explanatory.

  4. Binning is the key to achieving our target sampling rate with large scopes and large sensors.

    Speed is ultimately: surface_of_sky_covered_by_sampling_element * QE * telescope_losses * aperture_area

    In the above equation there are only two parameters that can be varied to a greater extent - one is aperture and the other is sampling rate / sky area. The latter determines the type of image we are after - do we want a wide field image with a low sampling rate, or do we aim to be right at the maximum detail possible for our conditions? If we aim for the latter, we don't have much freedom in this parameter. This leaves aperture. We can choose to go with a 50mm or a 300mm scope, but in order to hit our target sampling rate we would need an equally wide range of pixel sizes - which we don't have. Binning to the rescue (a rough worked example is below).
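
    ```python
    # Rough illustration of the speed relation - two setups that hit the same
    # target sampling rate, where the larger aperture wins on speed. All
    # numbers are assumed examples.
    ARCSEC_PER_RAD = 206_265

    def sampling_rate(pixel_um: float, focal_mm: float, binning: int = 1) -> float:
        """Arcseconds per (binned) pixel."""
        return ARCSEC_PER_RAD * pixel_um * binning * 1e-3 / focal_mm

    # Target ~1.55"/px:
    print(sampling_rate(3.76, 500))       # 80mm f/6.25 refractor, unbinned
    print(sampling_rate(3.76, 1000, 2))   # 200mm f/5 newtonian, binned 2x2

    # Same sampling rate, but the 200mm collects (200/80)**2 = 6.25x more
    # light per sampling element - so it is ~6x faster (before QE and losses).
    ```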

    I simply don't like the AI side of things. We have very good conventional algorithms that do wonders as well, if applied correctly.

     

  9. 2 hours ago, lukebl said:

    I think I’m even more confused now!
    The science of it baffles me. All I want to know is which camera is more sensitive, assuming the same optical setup, the 533MC-Pro or the 585MC-Pro?

    What Andrew is saying there is the same as what I've said - only from a different angle.

    The sensitivity of a photo detecting element is surface_area * QE. This is true for a single pixel as well as for the sensor as a whole.

    From that it can easily be seen that if you have two cameras with the same area of photo detecting element - their relative sensitivity will be determined by QE alone.

    This can be further expanded like this:

    - if you have two cameras with the same pixel size - then the one with higher QE is more sensitive

    - if you have two cameras with the same sensor surface - then the one with higher QE is more sensitive in the sense of total number of photons gathered - but in order to exploit that sensitivity you will need to handle the data in a particular way (it's not out of the box, because you will need to overcome the fact that the surface area is divided into pixels somehow)

    - the above two statements can be thought of as conflicting, and they are if you take them at face value - but the first is true if no special processing is done, while the second can be true with special processing.

    Think of the following:

    5+5+5+5

    4+4+4+4+4+4

    Now if you ask which one is larger - you can answer in two ways: 5 is larger than 4, and 24 is larger than 20 - in one case the answer is the first, in the other case the second. This is equivalent to looking at signal "per pixel" versus "per sensor" (sum of all pixels).

  10. 2 minutes ago, lukebl said:

    So, the basic message is that the ASI 533MC-Pro, although it has a lower QE than the ASI 585MC-Pro, is more sensitive due to the larger pixels relative to the QE, and will capture objects of lower magnitude?

    Yes, provided it is used with the same optics and nothing special is done to the data.

    If you have the choice of telescopes you can use with your cameras, and you can bin your data - then it depends.

    If you want to get really faint stuff in a "reasonable" amount of time, then this would be my advice:

    - figure out your working resolution in arc seconds per pixel (one that will capture all the detail you are after and will not oversample) - see the sketch after this list for the arithmetic

    - get the biggest aperture that, in combination with your pixel size and any binning factor, gives you a focal length providing the wanted working resolution - while still having enough FOV to capture the target.

    (of course, consider all other variables like the ability to mount said largest aperture scope, costs involved, quality of optics and so on ...)
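
    A small helper sketch for the first step - working resolution from pixel size, binning and focal length, plus the inverse (example numbers only):

    ```python
    # Pixel scale from pixel size, binning and focal length, and the focal
    # length needed to hit a target resolution. Numbers are examples only.
    ARCSEC_PER_RAD = 206_265

    def arcsec_per_pixel(pixel_um: float, focal_mm: float, binning: int = 1) -> float:
        return ARCSEC_PER_RAD * pixel_um * binning * 1e-3 / focal_mm

    def focal_length_for(pixel_um: float, target_aspp: float, binning: int = 1) -> float:
        """Focal length (mm) needed to hit a target resolution in arcsec/px."""
        return ARCSEC_PER_RAD * pixel_um * binning * 1e-3 / target_aspp

    print(arcsec_per_pixel(3.76, 750))      # ~1.03"/px unbinned
    print(focal_length_for(3.76, 2.0, 2))   # ~776mm gives ~2"/px when binned 2x2
    ```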

     

  11. QE is just one part of the equation - and the quoted number is the peak QE - which does not paint the whole picture.

    QE is the percentage of photons falling on the sensor - on a single pixel - that will be detected / converted into electrons.

    Say you have a pixel that is a 3x3 um square. Not all of that 3x3 um is photosensitive area - there is some electronics around it as well. Further, depending on the wavelength of light, photons might miss the photosensitive site - simply go through it. Other photons scatter off the material - they simply get reflected. Others still get absorbed without producing an electron.

    Light does pretty much what it normally does in everyday life - it is either reflected, absorbed or transmitted. Only some fraction of the incident photons actually manage to hit an electron and knock it into the potential well.

    That is what QE is - if you have 80% QE at some wavelength, that means that out of 100 photons, roughly 80 (or rather, 80 on average) will be converted into electrons.

    Back to the beginning - the number you see quoted is the peak QE. The actual QE curve often looks like this:

    [image: relative QE versus wavelength curve, normalized to 1.0 at the peak]

    This is a relative QE curve, and the highest point in it - marked 1.0 - is the actual 80% QE specified for the camera. This means that if you want the actual QE for, say, the Ha line - which is 656nm - you need to read this chart at 656nm (x axis) and then multiply that by the peak QE. So we have 80% x 0.84 = 67.2%.

    The above camera has 67.2% QE at the hydrogen alpha line, if its peak QE is 80%.

    In the end, QE is just one parameter of how sensitive a camera is. Another is pixel size (but we have to take focal length into account as well - and that is where things get complicated).

    If you take the same optics and attach two different cameras to it - the actual formula for which camera is more sensitive when paired with said optics is:

    area_of_pixel * QE_at_wavelength

    In other words - if you have a camera with 3.8um pixels that has 50% QE at the Ha line, and a camera with 2.4um pixels with 85% QE at Ha - the first will be faster when used with the same scope, because:

    3.8 * 3.8 * 50%  = 7.22

    2.4 * 2.4 * 85% = 4.896

    7.22 > 4.896

    If the cameras have different QE curves and you shoot broadband targets - well, things get rather complex. You need to integrate the spectrum of the target x the QE curve, and multiply that by the pixel area, for both cameras. That is really out of scope for most people (no one really knows the spectrum of the target until it is measured - and there is often a different spectrum for each feature of the target) - so people stick to peak QE, the rationale being that QE curves are in general pretty much the same shape (just pay attention that for specific purposes some cameras can have an edge - like those very sensitive in the IR part of the spectrum or similar). A sketch of that integral is below.
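
    ```python
    # Sketch of the broadband comparison: effective sensitivity ~ pixel area
    # times the integral of (target spectrum x relative QE). The spectrum and
    # QE curves below are made-up placeholders, not real camera data.
    import numpy as np

    wavelengths_nm = np.linspace(400, 700, 301)
    target_spectrum = np.ones_like(wavelengths_nm)   # flat spectrum (assumption)

    # Hypothetical relative QE curves (fraction, 0..1):
    qe_cam_a = 0.80 * np.exp(-((wavelengths_nm - 530) / 150) ** 2)
    qe_cam_b = 0.90 * np.exp(-((wavelengths_nm - 480) / 110) ** 2)

    def effective_sensitivity(pixel_um: float, qe_curve: np.ndarray) -> float:
        d_lambda = wavelengths_nm[1] - wavelengths_nm[0]
        return pixel_um ** 2 * float(np.sum(target_spectrum * qe_curve) * d_lambda)

    print(effective_sensitivity(3.8, qe_cam_a))   # camera A (bigger pixels)
    print(effective_sensitivity(2.4, qe_cam_b))   # camera B (higher peak QE)
    ```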

    Hope this helps

     

  12. 3 minutes ago, Xilman said:

    A Newtonian can be, but rarely is, corrected for SA with lenses in the same way as its coma (absent in a spherical mirror) can be and usually is in fast systems.

    Well, people do have good opinions of telescopes like the Maksutov Newtonian and Schmidt Newtonian. Granted, those are full aperture correctors rather than sub aperture correctors, but they can be quite fast systems as well - often F/4-F/5.

    • Like 2
  13. 2 minutes ago, Xilman said:

    Not without corrective elements, they don't. My 0.4m Dilworth contains nothing but spherical surfaces. A train of spherical lenses corrects for spherical aberration. Not sure of the focal ratio of the primary, but guessing from the length of the OTA it is probably around f/2.5. Take a look at http://www.astropalma.com/equipment.html to see what I mean.

    For a Newtonian with a single curved mirror - it is true that spherical aberration increases rapidly with faster optics.

    https://www.telescope-optics.net/reflecting.htm

    [image: spherical aberration formulas for a conic mirror, from telescope-optics.net]

    For a paraboloid it is equal to 0, but for a spherical mirror with K=0 we can see that it goes as the inverse of the third power of the F/ratio, so the telescope has to be really slow, or have a small diameter.
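
    As a rough illustration of that scaling - using the commonly quoted approximation for the best-focus P-V wavefront error of a sphere, W ≈ D / (2048 · F³), with D the aperture (treat this as an approximation, not an exact figure):

    ```python
    # Illustration of the 1/F^3 scaling, using the commonly quoted
    # approximation for the best-focus P-V wavefront error of a spherical
    # mirror: W ~ D / (2048 * F^3), with D and W in the same length units.
    def sphere_pv_error_waves(aperture_mm: float, f_ratio: float,
                              wavelength_nm: float = 550.0) -> float:
        w_mm = aperture_mm / (2048.0 * f_ratio ** 3)
        return w_mm * 1e6 / wavelength_nm        # mm -> nm -> waves

    print(sphere_pv_error_waves(150, 8))   # ~0.26 waves - the classic 6" f/8 sphere
    print(sphere_pv_error_waves(150, 5))   # ~1.07 waves - far too much at f/5
    ```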

  14. 10 minutes ago, TiffsAndAstro said:

    Big aperture and shorter tube length would suggest not spherical / catadioptric? Or the opposite? Or neither?

    A catadioptric has a refractive glass element in the optical train alongside the mirrors - usually as a front aperture corrector, making the tube closed. Sometimes this element sits inside the focuser or in front of the secondary - but then you'd have a very short tube (Bird-Jones design) or a Rutten Maksutov type (but you have a Newtonian, so it's not that).

    If it's a longer tube, there is a greater chance that it's a spherical mirror. Fast telescopes and spherical mirrors don't get along. If it's F/8 or slower - then it might be spherical.

     

  15. 34 minutes ago, Ags said:

    One thing that puzzles me is that the gotos require so much correction from plate solving. I am almost perfectly polar aligned, so why doesn't the goto get close? With a regular goto, you expect the target to be somewhere in the field of a wide eyepiece, but the HEM15 and AIR are not even close. Plate solving papers over the errors, but it is puzzling.

    How fast is your slew to position?

    Looking at the specs - the stepper motors could be quite a bit underpowered. This can result in skipped steps at high slew speeds. Steppers lose torque at higher RPMs, and since this mount does not use counterweights, it relies on the stepper motors to provide the torque that keeps the weight in position.

    From the specs - max current when slewing is 0.8A. That is very low; we can say it probably uses ~0.3A per motor (some of the current goes to the electronics and the rest to the two motors).

    Anyway, it slews at a maximum of 6 degrees / second, with a reduction ratio of 360:1.

    That makes one whole revolution of the axis in 60 seconds - or 1 rpm - so the motor needs to spin at 360 rpm (a quick sketch of the arithmetic is below).
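
    ```python
    # Quick check of the slew arithmetic (6 deg/s and 360:1 reduction from
    # the specs above; the 200-step motor and 1/8 microstepping are assumptions).
    slew_deg_per_s = 6.0
    reduction = 360.0

    axis_rpm = slew_deg_per_s / 360.0 * 60.0   # 1 rpm at the mount axis
    motor_rpm = axis_rpm * reduction           # 360 rpm at the motor

    steps_per_rev = 200 * 8                    # 1600 microsteps per revolution
    step_rate = motor_rpm / 60.0 * steps_per_rev
    print(motor_rpm, step_rate)                # 360.0 rpm, 9600 microsteps/s
    ```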

    Here is an example of a 0.4A NEMA 17 motor showing the dependence of torque on RPM - at 360 rpm the pull-out torque is 1/10th of that at slow speed (the motor is rated at 26 N·cm). This is with 24V and 1600 microsteps per revolution (i.e. 1/8 microstepping) - things only get worse with finer microstepping and when powering the motors from a lower voltage, like 12V.

    [image: torque versus RPM curve for a 0.4A NEMA 17 stepper motor]

    Anyway - try setting slew speed to 1-2 degrees / second instead to see if that helps with goto accuracy.

  16. 5 minutes ago, Xilman said:

    If one star is a white dwarf, it could be moving towards, away from, or be stationary relative to the Earth.

    A strong gravitational field also creates a red shift which may be larger than the Doppler-induced one.

     

    I always wondered how large this effect is - particularly on galactic scales.

    For example - when computing Hubble's law, do we have to take into account the relative difference between galaxy masses - the origin galaxy versus the Milky Way?

    When light leaves the origin galaxy it will be redshifted, but then when it "falls into" the MW it will be blueshifted - the difference between those two will be some percentage of the total redshift - but how large is the effect? (A rough estimate is sketched below.)
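
    ```python
    # Back-of-envelope estimate using the weak-field gravitational redshift
    # z ~ GM / (r c^2). Galaxy mass and starting radius are ballpark guesses.
    G = 6.674e-11      # m^3 kg^-1 s^-2
    C = 2.998e8        # m/s
    M_SUN = 1.989e30   # kg
    KPC = 3.086e19     # m

    def grav_redshift(mass_kg: float, radius_m: float) -> float:
        return G * mass_kg / (radius_m * C ** 2)

    # Milky-Way-like galaxy, ~1e12 solar masses, light climbing out from ~8 kpc:
    print(grav_redshift(1e12 * M_SUN, 8 * KPC))   # ~6e-6 - tiny vs cosmological z
    ```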

  17. 1 minute ago, mikeDnight said:

    Unfortunately Vlaiv, my tablet won't allow me to open your link, which I'm sure would be highly informative as usual. 😆

    I was aiming more for the funny side :D than the informative one, but hey ....

    (I edited the post and inserted the actual image instead of hot linking it ...)

  18. 35 minutes ago, mikeDnight said:

     I've never been so insulted - middle class nerd indeed!  I'll have you know I'm a working class nerd and proud of it. 😉 

    Just to be sure on terminology there ...

    [image: chart defining "nerd" and related terminology]

  19. 17 minutes ago, ollypenrice said:

    I agree that redshift/blueshift don't need acceleration (as Jim also points out) but a body's proper motion does, surely, need to originate with an acceleration? What I'm getting at is that the OP's question is about movement since he asks in which direction the stars are moving. Is there not a fundamental difference between a body 'moving' because of the expansion of space and 'moving' because it has, at some point, been accelerated?  If not, why do cosmologists distinguish between Doppler and Cosmological redshift?

    Olly

    I'm not sure the OP is concerned with the origin of the motion, but rather with the fact that there is no preferred direction.

    If you look at this post:

    7 hours ago, Space Traveller said:

    Consider this, Star B is moving to the right at a speed of 6 units.  The earth is moving to the right at a speed of 4 units and Star A is also moving to the right at a speed of 2 units, thus the stars show a Red-Shift.

    However when we observe the stars, we are studying them from what I would describe as 'Thumb-Tack Earth', and there is nothing we can do about it.

    I believe it shows what is at the heart of the question. We tend to study stars from Earth and measure their motion relative to us.

    Someone else might see a different motion - after all, motion is relative - and their Doppler shift would be in agreement with what they observe.
