Posts posted by vlaiv

  1. Well, my best effort at handling this data (red channel only for now) involves binning x2 in software to increase SNR - that gives a good outline of the nebulosity:

    [attached image: red channel stack, binned x2, shown at 100% zoom]

    My "sophisticated" stacking algorithm does not yet support sigma clipping - so some satellite trails are visible. The final image will be quite small - this is due to the fact that I debayer by splitting channels rather than interpolating, and then binned x2 on top of that, which brings the image size down x4 compared to the original sub size - the above screenshot is 1:1 (or 100% zoom).
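
    That split-and-bin step is easy to sketch in numpy (a minimal illustration, assuming an RGGB mosaic in a 2D array; the function names are my own):

```python
import numpy as np

def split_debayer_rggb(raw):
    """Extract half-size color planes from an RGGB mosaic - no interpolation."""
    r  = raw[0::2, 0::2]
    g1 = raw[0::2, 1::2]
    g2 = raw[1::2, 0::2]
    b  = raw[1::2, 1::2]
    return r, (g1 + g2) / 2.0, b   # average the two green planes

def bin2x2(img):
    """Software 2x2 binning: average each 2x2 block, halving width and height."""
    h, w = img.shape
    return img[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
```

    Splitting halves each dimension and binning halves it again, which is where the x4 size reduction comes from; averaging 4 pixels with uncorrelated noise improves SNR by a factor of sqrt(4) = 2.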

    Will try processing other channels as well and combining data to get resulting color image ...

  2. @Anthonyexmouth

    No wonder you are having trouble stacking and processing your image - the data is rather poor. A lot of the subs are just high-altitude clouds and LP reflecting off them.

    Here is a little gif that I made - it's the red channel binned down to a small size and linearly stretched to show the frame-to-frame difference. Some of the frames contain nothing more than LP glow and only the brightest stars.

    [animated gif: binned red channel subs, linearly stretched, frame by frame]

    Stacking such data will produce poor results - both in terms of SNR and in the form of significant strange gradients.

    I will try to get the best out of this data using my own "sophisticated" algorithm designed to handle cases where there is a significant SNR difference between subs, but I don't expect much from this data. I will need to drop at least a couple of frames that are very poor.

  3. 2 minutes ago, Anthonyexmouth said:

    As a flat field generator is out of the question at the moment, motorised focuser is next on the shopping list. Would a flat field app on my tablet be a better way to take uniform flats? am i right in thinking i can lower the light to take slightly longer flats and then take the flat darks right after with the same settings?

    what would be a good number of frames to take?

    I have not worked with a tablet/laptop screen so I can't comment on that, though I think some people use it successfully.

    You can keep the short exposure times you have now. I have a rather strong flat box and my flat exposures are also very short - just a few ms. I only once had a problem with flats, and that was because of a faulty power supply connector on my flat panel. The panel was flickering rapidly, probably due to sparking on a less-than-perfect power connection, but it was not something you could see with the naked eye - the light appeared uniform. The flicker frequency was probably way too high for the eye to see, but at such short exposures it showed up as banding in the flats. Once I figured it out, it was an easy fix with a bit of soldering.

    The best way to go about flats in your case is to set the flat exposure manually (don't leave it on auto) so that all three histogram peaks land on the right part of the histogram if possible - just avoid clipping bright areas. Take as many flats as you can, then leave all settings as they are and do another run with the scope cover on. Simple as that. I advocate a large number of all calibration subs - I tend to use 256 of each (I have this thing about binary numbers - that is 2^8 :D ).

    If you dither your subs, then you can use fewer calibration subs - you are already using a pretty decent number, 50 of each.

    Btw, I'm just about to stack your subs using ImageJ to see what sort of result I will get in that workflow. Will post results.

  4. 3 minutes ago, Ronclarke said:

    That's what I thought! The mystery deepens!

    Thanks

    Ron

    What do the files look like? Do they have a .cr2 extension? What size are they? Can you open them in another application and check stats - like image size and such?

    Canon has some free utility for image enhancement - maybe use it to see what the images look like?

  5. Just now, Ronclarke said:

    APT told me to use the Av setting instead of the M setting, that's the only change I'm aware of??

     

    Ron

    I'm not familiar with APT, but I suspect that it's because of the need for "automatic" flat exposure - Av is Aperture priority mode, meaning the aperture does not change (nor could it, since you have your camera on the scope instead of a lens), so the camera automatically determines the needed exposure time. It should not affect the file/image size.

  6. The only reason I can think of, except of course DSS having trouble reading those particular files, would be a change of image size in the camera settings.

    Did anyone by any chance use that camera during the daytime? Maybe they set a different image size / switched back to JPEG capture or similar and you did not notice?

  7. Out of curiosity I ran the test for the "best" amateur equipment, and I believe a 100 ly radius is a bit optimistic.

    In pristine mag 22 skies right at zenith with exceptional transparency, superb seeing of 1.5" FWHM, a 20" scope and a probably non-existent CCD (average QE 0.7, read noise 5e, giving 1"/px at such focal length - meaning huge pixels), an SNR of 2000 can be had in half an hour of exposure down to a mag 14 source (I guess transits are limited in time and you need at least a couple of measurements to get the curve).

  8. 1 hour ago, robin_astro said:

    Hi Vlaiv,

    A quick check suggests at 100 lyr a main sequence G star would be Vmag ~7.5  an M dwarf would be ~13.5. 

    https://sites.uni.edu/morgans/astro/course/Notes/section2/spectraltemps.html

    http://www.calctool.org/CALC/phys/astronomy/star_magnitude

    The magnitude is not so much the problem though. It should be possible to get the required SNR (2000 to detect 0.1% at 95% confidence) in 5 min total exposure for a mag 11 star and 200mm aperture

    http://spiff.rit.edu/richmond/signal.shtml

    The main problem is likely to be identifying transits against background systematic variations at the mmag level with the same timescales as the suspected transits, particularly where potential events extend over different observers.

    Cheers

    Robin

    Well, this certainly shows that one would be able to measure some of the stars. Here is my calculation from the same sources:

    Magnitude range: up to 16.83 (M8 type star - absolute mag 14.4, distance 100ly), or roughly up to mag17

    With my setup, in my location, I would be able to achieve an SNR of 2000 up to mag 9 stars in 5 minutes - F/8 8" RC, mag 18.5 skies, 0.5 QE, 3.8um pixels binned x2 with read noise 3.4e (original 1.7e, software binning), sampling rate ~1"/px, around 2" FWHM.
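
    Both of these figures come from the standard CCD SNR equation. A rough sketch of that kind of calculation (the V-band photon rate of ~8.8e5 photons/s/cm^2 for a mag 0 star is an approximate textbook zero point, and the function and its defaults are my own simplification, not the exact calculators linked above):

```python
import math

def snr(mag, aperture_cm2, qe, exposure_s, sky_e=0.0, read_noise_e=0.0, npix=1):
    """Photon-noise SNR estimate: signal / sqrt(signal + sky + npix * RN^2)."""
    # ~8.8e5 photons/s/cm^2 from a V = 0 star across the V band (rough value)
    photons = 8.8e5 * 10 ** (-0.4 * mag) * aperture_cm2 * exposure_s
    signal = photons * qe                  # detected electrons
    noise = math.sqrt(signal + sky_e + npix * read_noise_e ** 2)
    return signal / noise
```

    In the shot-noise-limited case, doubling the total exposure raises SNR by sqrt(2), which is why reaching SNR 2000 on faint stars needs either large aperture or long integration.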

  9. The thing with the ES68 16mm is that it has somewhat shorter eye relief - quoted at 11.6mm.

    Not sure if that is important to you or not. Otherwise, it is a very good EP (from my limited time with it, so I say that with some reservations).

    For better eye relief and similar AFOV in 15mm or 18mm, do have a look at the APM UFF series - it is priced similarly to the ES68 16mm.

  10. 4 hours ago, MarsG76 said:

    I've have seen a image of the pillars taken by a fellow in Israel using a 16" RC scope during very dry conditions and obviously optimal seeing/atmospheric conditions, and his result was by far the best I have seen taken by a amateur... It was as close as I have ever seen comparing to the original hubble image..  I'll see if I can find it....

    That said, Rodd's image is very excellent and this is most likley at the level where the dryness of the atmosphere will play a bigger role than normally expected.

     

     

    I'm sure it was a really good, well resolved image, as most of the things that affect image resolution could have been right:

    - enough aperture to provide resolving power

    - desert conditions - dry air minimizes scatter (no halos around stars), and the seeing was probably very stable (it tends to be in flat deserts on certain nights)

    - the mount used is probably top tier (it needs high capacity to hold a 16" RC scope) and guides very well.

  11. I've been very happy with my ES68 28mm. It might be a bit too much focal length for you, but there are other focal lengths in that range.

    I also have the 16mm of that line, but I have not used it extensively, so I can't give a recommendation for that one yet. It's a bit tight on ER, which is not the case with the above-mentioned 28mm.

    Of the list you mentioned, I can comment on the following:

    SLVs should be considered for motorized / tracking scopes and planetary use. I would not recommend them for general observing as the AFOV is too small - although it is quoted at 50 degrees, it is closer to 45 degrees by numerous accounts.

    Baader Hyperion: this is something I've only read about - no direct experience - but you want to avoid these in faster scopes. They suffer from astigmatism in fast scopes and the outer part of the field is not quite usable.

    In general I think that for wide-ish field observing you need two EPs: one wide-field general overview EP - something like the above-mentioned ES68 28mm (which is a 2" EP), so something in the 25-30mm territory - and one shorter FL EP that will be your main DSO observing EP, in the 15-18mm range.

    In this range, there are several really good EPs, for example:

    - Baader Morpheus 17.5 is very very good EP (quite expensive)

    - APM UFF 18mm (65deg) is another very good EP (much more affordable)

  12. I would have thought that such a project needs some sort of website to support it?

    Do you have such a website / web based service to aggregate data and coordinate observations?

    That way you could attract many more "short term" observers and still cover the full time period and all the stars. There will be many more people willing to dedicate an hour or two a week of their observing time to this project than people willing to participate "full time".

    I'm not sure that recommending at least a 16-bit CCD sensor is a good way to approach this. CMOS sensors are becoming more and more popular, and most of them have "only" 12 or 14 bits, but much lower read noise. I'm fairly certain they can be used as effectively as 16-bit CCDs.

    What sort of magnitude are the potential star candidates? Registering a drop of 0.1% in brightness will depend on the target star's brightness in the first place, and also on the transit duration. If you provide a range of values for that data, it would be rather easy to calculate the needed exposure lengths (or stacks) for a given scope aperture and recording resolution (and need for binning) to reach the SNR needed for a detection.

    The above website could host an online calculator for this: people wanting to participate could select a target star (or be assigned one depending on their location and designated observing time, with coverage taken into account) and specify their equipment, and it would provide directions for the observation - needed exposure(s), any stacking, proper calibration - and finally a way to submit their data, either for further processing / analysis, or people could download software to analyze their subs on their own computers and just submit results.

    Anyway, interesting project - I would love to participate, but I don't have a permanent obsy and am generally lacking observing time these days due to a shortage of free time.

  13. QM is not that complicated at all - what is problematic for most people is the math behind it: you need to understand the math in order to understand QM.

    If you have the math knowledge, there is a very good set of lectures online by Leonard Susskind that covers many aspects of physics, including much of QM. The lectures are based on his book and titled the same: The Theoretical Minimum.

    There is a website with link to videos of lectures on youtube:

    https://theoreticalminimum.com/

    I find the lectures very good, although sometimes a bit slower paced than I would like (some concepts are explained multiple times because the lectures were recorded in an actual class and people ask questions, etc ...).

  14. 59 minutes ago, Anthonyexmouth said:

    cool, i'll get to work on the dark flats tonight. forecast is promising so i'll uncover the scope and maybe get some imaging done too. 

    The thing with flat darks, or dark flats (I've seen both orders used :D ), is that in some circumstances they are very close to bias - namely when you have short flat exposures. It is not always the case - some sensors need longer flats and a dimmer flat source, like CCD sensors with a mechanical shutter, where you have to be careful not to capture the motion of the shutter in a single frame (the flat exposure needs to be much longer than the shutter travel time).

    In principle, you should always avoid bias frames with CMOS sensors - I don't think there is currently any CMOS sensor that has OK bias frames; every CMOS sensor I've seen suffers from some sort of problem with them. Therefore it is best to think of flat darks (or dark flats) as being just that - darks and not bias - meaning they need to be taken at exactly the same settings as the flats (gain, offset, exposure, temperature, ...), with the only difference being the covered scope.

    Wim explained the math behind it - all calibration is aimed at keeping only the photon response and removing all other signals related to the electronics (and shadows).

    A flat dark library is a good thing to have, provided you have a "controlled" environment - the same as with regular darks (where you control temperature). This means you need a regulated light source for flats and must always use that flat source for every filter (or, in your case, just the sensor, because it's OSC). If your method of getting flats is a T-shirt over the aperture and a bright sky at sunset / sunrise, then I'm afraid you won't be able to use a flat dark library - the exposure length for flats will depend on how bright the daylight is when you take your flats, how thick the T-shirt is, and how many times you folded it over the aperture of the scope.

    The best approach in that case is to take a set of flat darks right after taking the flats - just cover your scope after you've taken the flats and do another set with the same settings, exposure length included. Flats / flat darks don't take much time to complete; you should be done with them in about 10 minutes or so (depending on download speed, exposure length and the number of subs you take).
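
    The arithmetic that Wim described boils down to one line per sub: subtract the matched dark from both the light and the flat, then divide by the normalized flat. A minimal numpy sketch (assuming the master frames are already averaged from many subs; the names are mine):

```python
import numpy as np

def calibrate(light, dark, flat, flat_dark):
    """Remove dark signal, then divide out the flat field (vignetting, dust)."""
    master_flat = flat - flat_dark          # photon response of the flat only
    master_flat /= np.mean(master_flat)     # normalize so division keeps scale
    return (light - dark) / master_flat
```

    Because the flat is normalized to a mean of 1, the division removes vignetting and dust shadows without changing the overall signal level.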

  15. 2 minutes ago, wornish said:

    Thanks my bad...    as usual🤐

    I use a mono ASI1600 so not come across this before.

    I guess its best I just follow the thread and not embarrass myself going forward. - Sorry to intervene.

     

     

    Why don't you give it a go anyway? APP should have option to debayer images taken with OSC sensor - after that it's pretty much processing as usual ...

    Here is a little thread on how to work with OSC FITS (APP does an automatic job on DSLR raws but does not recognize OSC FITS):

    https://www.astropixelprocessor.com/community/main-forum/how-to-debayer/

  16. 3 minutes ago, wornish said:

    Been following this thread and just thought I would try your data in APP to see what happens.

    All the lights you shared are Grayscale,  I thought the camera was one shot colour?

    As they are mono images there is nothing to tell which channel they are, red, green or blue.  Not sure if I am missing something here.

     

     

    They are raw images with the bayer matrix still in place - you need to calibrate them and then debayer to get color out of them. The bayer matrix setting for this camera model is RGGB if I'm not mistaken.

  17. 20 minutes ago, wornish said:

    This discussion makes me realise I don't know enough about the interaction between all the variables when capturing a deep space images.  Focal Length, Pixel size, pixel scale, viewing, FWHM, diffraction limit etc.

    I understand the basics but can't get my head around how each factor interplays.

    Is there a post anywhere that explains all of this in relatively easy to understand terms?

    I don't think there is a single post, but there are a lot of posts that debate the individual aspects / topics you mention. I think it's all been covered, just not in one place - it's scattered over the forums, mostly in the Imaging section - so you'll have to do a bit of searching, or maybe start a new thread that puts everything in one place?

  18. This info is usually available in the software that you are using, so it depends on your stacking / processing software.

    For example, DSS will list this for each sub when registering them:

    [screenshot: DSS sub list showing FWHM values]

    Note that DSS gives these values in "pixels" and not arc seconds. In the above example, if you are working at 0.5"/px, the actual values would be:

    3.44"
    3.35"
    ...

    (so half of what is written, or FWHM in pixels * sampling rate)

    PI also has this functionality. I don't have a PI license so I can't say where the option is, but I know it's there, and you can use FWHM values in the sub selector, for example, to choose which subs to stack.

    AstroImageJ will also provide this info, here is one of my Ha subs:

    [screenshot: AstroImageJ FWHM measurement of a Ha sub]

    It says the FWHM is 3.19, and this image is sampled at around 0.96"/px, so the true FWHM value is about 3". The optimum sampling rate in this case would be 3/1.6 = 1.875"/px, so I'm oversampling at 0.96"/px. However, this is based on a single star - you want some sort of average FWHM across your sub, or better still an average across subs, to get an idea of what your sampling rate should be.
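
    The two conversions used here are trivial but worth writing down (the /1.6 optimum-sampling rule of thumb is the one quoted above; the helper names are mine):

```python
def fwhm_arcsec(fwhm_px, sampling):
    """Convert a FWHM measured in pixels to arc seconds (sampling in arcsec/px)."""
    return fwhm_px * sampling

def optimal_sampling(fwhm):
    """Rule of thumb: sample at FWHM / 1.6 arc seconds per pixel."""
    return fwhm / 1.6
```

    For the sub above: 3.19 px * 0.96"/px gives about 3.06", and 3 / 1.6 = 1.875"/px, so sampling at 0.96"/px is oversampled.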

  19. It's no good for long exposure / deep sky astrophotography, but it is quite OK for planetary imaging.

    These two (DSO and planetary AP) are quite different in gear requirements.

    For deep sky AP you want the following:

    - a very good mount: that means stability / carrying capacity / good, smooth and precise tracking, plus the ability to be guided well (unless you have encoders, but that is much more money than this whole setup)

    - a suitable telescope with suitable focal length and a well corrected field, with a good focuser that will hold your camera without any sort of movement / tilting

    The EQ3 is a very basic mount. It is lightweight and not a stable platform for such imaging; its tracking is adequate for observing but not for AP through a telescope. It is good for mounting just a camera with a short focal length lens and doing wide field AP. The 150P telescope is again not the best option for DSO AP - its long focal length demands precision, and it has a basic 1.25" focuser that won't hold your camera properly, ....

    In principle, one can do DSO AP with such a setup, but it takes a lot of experience to do so. It is by no means something I would recommend to a beginner.

    Planetary AP is quite different, and I would say this is quite an OK, even very good, combination for that. It does not need perfect tracking, and the scope is well suited because the field of view when capturing planets is very small. The only thing I would recommend for planetary AP is skipping the DSLR-type camera and getting a dedicated planetary camera for this role. In that combination it can provide very good results.

  20. 8 hours ago, MarsG76 said:

    Excellent result.. I find that the pillars were quite small at 2000mm too.. so I'd estimate that 6000-8000mm focal length would be needed to get good detail and a good size of the pillars....

    The image Rodd captured is about as good as it gets with amateur setups. Additional focal length is not going to help here - you'll just capture a blurred image.

    It is very, very hard for amateur setups to get any sort of additional detail below 1"/px. The Pillars are some 4'40" high and 2'10" wide; if you sample at 1"/px, their width comes out at about 130px.

    [attached image]

    Rodd's image is already sampled at somewhere around 0.78"/px (an estimate based on a quick measurement of the distance between the two bright top stars), so this is pretty much the biggest meaningful rendition of the Pillars with amateur setups.
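
    The pixel arithmetic behind those numbers, as a quick sketch (the function name is mine):

```python
def angular_size_px(arcmin, arcsec, sampling):
    """Angular size (arcmin + arcsec) -> pixels, at `sampling` arcsec/px."""
    return (arcmin * 60 + arcsec) / sampling
```

    So the Pillars' 2'10" width is 130 px at 1"/px, and about 167 px at the 0.78"/px of Rodd's image - a modest gain in size for a big jump in required resolution.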

  21. 16 minutes ago, MilwaukeeLion said:

    No this is good info because im bouncing images between 3 programs, (PI - Nebulocity - Straton star removal). Easy to recombine starless image with stars only or combine 2 final results using stars or non stellar align in Neb. All handle fits but result out of Stratton is reduced by half size. From say 35 mb fits going in to 17 mb fits coming out.  Am I losing valuable info using Straton as opposed to taking the long route and removing stars manually?

    It depends on where you are in your processing pipeline. I'm not familiar with Straton, but I recently had a look at StarNet++ - it requires already-stretched images.

    The point of 32-bit precision is to keep all the data at the required precision up to the point of stretching. After you have stretched your data you can go to lower precision - in the end, the image ends up as 8 bits per channel simply because displays work at that precision (the human eye can distinguish only about ten million colors, fewer than the ~16.7 million that three channels of 8-bit data provide; there are some other technicalities, like the smaller gamut of displays vs human vision, but that is a separate topic).

    If you use Straton on a stretched image then you don't have to worry much about the 16-bit format, but if not, I would suggest a different approach:

    - make a copy of the image you want to work with

    - convert it to 16 bit yourself and keep one copy of it

    - open it in Straton, do the star removal, and save the "clean" 16-bit starless copy

    - use the two 16-bit images above to create a star mask, then start from the original 32-bit image and paste in 16-bit (starless) data only where stars once were - leaving most of the image, which has no stars, unaltered. Then proceed to process / stretch that image, and at the end paste the stars back over the finished image.

    This way you will keep the most important bits in high precision. You have to be careful though, and it will probably require some pixel math to match the 16-bit data back to the 32-bit data. The best way is to set your 32-bit data to the 0-1 range; when you convert it to 16-bit format it should then be in the 0-65535 value range (unsigned 16 bit). Once you have the two images from Straton - the regular 16-bit one you kept a copy of and the 16-bit result of star removal - load them back into your processing software, change to 32 bit and divide by 65535 to return to the 0-1 range. Subtract the two images and you should be left with the star field only. Use a selection tool to select the background, invert the selection so only the stars are selected, apply that selection to the now-32-bit version of the starless image, copy, and paste over the original 32-bit version of the image (the copy from the first step).
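
    The same pixel math can be sketched in numpy (assuming the 32-bit image is already in the 0-1 range and the Straton images are unsigned 16 bit; I use a simple brightness threshold in place of the manual background selection, and its value is an illustrative assumption):

```python
import numpy as np

def patch_in_starless(orig32, with_stars16, starless16, threshold=1e-3):
    """Keep 32-bit data everywhere except star regions, which get 16-bit starless data."""
    with_stars = with_stars16.astype(np.float32) / 65535.0  # back to 0-1 range
    starless = starless16.astype(np.float32) / 65535.0
    star_mask = (with_stars - starless) > threshold         # True where stars were
    result = orig32.copy()
    result[star_mask] = starless[star_mask]                 # 16-bit data only under stars
    return result
```
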

    Hope this makes sense?

  22. I would say FITS, as it is the standard astro interchange format - most software should handle it properly.

    Bit depth depends on the "stage" and the camera used (capture format). Initial subs should be 16 bit (unless for some reason you opted for 8-bit format - but then you'd better be sure you know what you are doing), and every stage after that should be at least 32 bit.

    I say at least 32 bit because most software does not support double precision, but in due time we will see more and more software supporting it as CMOS sensors, particularly low read noise ones, continue hitting the market - then it will be feasible to have thousands of subs, and that many subs require higher precision than 32-bit floating point can provide.

    A 32-bit integer will actually store greater precision than a 32-bit float, but not much software supports that format, and one needs to be careful when coding support for it - most operations require 64-bit integers internally, with results scaled back to 32 bit for storage. OK, this is maybe a bit more info than needed :D - use 32-bit float precision for everything except the captured subs and you will be fine for the time being.
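
    The float precision limit is easy to demonstrate: a 32-bit float carries a 24-bit significand, so once an accumulator passes 2^24 (about 16.7 million) adding 1 no longer changes it - exactly the failure mode when summing thousands of 16-bit subs at single precision:

```python
import numpy as np

# float32 has a 24-bit significand: integers above 2**24 lose unit precision
a32 = np.float32(2 ** 24)
print(a32 + np.float32(1) == a32)    # True: 16777217 is not representable in float32

# double precision (float64) still has plenty of headroom
a64 = np.float64(2 ** 24)
print(a64 + np.float64(1) == a64)    # False
```
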
