Posts posted by ONIKKINEN

  1. 54 minutes ago, Ricky Graham said:

    I just ordered a rail kit from DFO, then I saw this thread? They took my money so I'm hoping they have the kits again?

    Let me just say that I have no personal stake in or beef with the company, but you might want to search for Dark Frame Optics on this forum and online in general and form your own opinion. It seems that any topic naming this company is treated like plutonium: untouchable and locked away forever. I hope you see your money or the product one day.

  2. I use it to finish off my images by applying denoising and sharpening; it works great and couldn't be simpler to use. You do have to be careful not to overdo it, though: if you crank all the dials all the way to the right, you basically get a painting.

    I think there is a free trial you can use to play around with it and see if you like it or not.

    • Thanks 1
  3. I image at roughly 0.9 arcsec/pixel with my 8-inch Newtonian, and I find that my skies are essentially never (so far literally not a single time) good enough to warrant it. Binning 2x2 to 1.8 arcsec/pixel is much more appropriate, if not ideal. I use DSS, SIRIL and Photoshop for processing.

    So at what point in the processing does binning make the most sense? I capture all my images unbinned so that I don't lose the option to use the higher-resolution data if conditions allow it on a given night, and as far as I understand, binning a CMOS camera gives the same result in capture or in post, since read noise is applied per pixel anyway, unlike with a CCD. Should I bin the stack before or after stretching to get the best results, or does it matter at all? Right now I mostly finish processing, and the final step might be a resize and a JPEG save.
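
    For what it's worth, software binning in post amounts to something like this minimal numpy sketch (my assumption of the usual approach, not taken from any particular stacker; for OSC data it would apply to each debayered channel):

    ```python
    import numpy as np

    def bin2x2(img):
        """Software-bin a 2-D image by summing each 2x2 block of pixels.

        Summing keeps the result comparable to hardware binning;
        divide by 4 if you want an average instead.
        """
        h, w = img.shape
        img = img[:h // 2 * 2, :w // 2 * 2]  # trim any odd row/column
        return img.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

    # Example: halving the sampling from ~0.9 to ~1.8 arcsec/pixel
    stack = np.zeros((3520, 4656), dtype=np.float32)
    print(bin2x2(stack).shape)  # (1760, 2328)
    ```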

  4. The mirror in my OOUK 8-inch is fixed to the support triangle with some sort of strong, "plasticky", loosely applied tape around the entire mirror. The mirror does not move on its own because of this, but I can move it if I try, so there is no pinching anywhere. I don't see added tape creating pinching unless you shim it way too tight.

    Consider buying strong springs for your collimation knobs; that way you won't need the locking screws. My mirror cell has only 3 spring-loaded thumbscrews in it and no locking screws, and it holds collimation well as long as the springs are tight enough. It might not work on larger mirrors, but it's worth a try at least.

    • Thanks 1
  5. +1 for NINA, and also SharpCap Pro just for polar alignment and the occasional lunar/planetary video. SharpCap Pro polar alignment is impossible to do wrong and takes a couple of minutes at most; it's really good. I used to align with only a polar scope, and after switching to SharpCap I noticed that I had always been somewhere around 1-6 arcminutes off. With SharpCap I get under 20 arcsec alignment every time with ease.

    Cuiv, the Lazy Geek, on YouTube will also teach you everything you ever wanted (or didn't want) to know about NINA; have a look at these if you like: https://www.youtube.com/playlist?list=PLDesYsLqfxSEoqrmI7v8apj_eXQ-tkuXG . After watching these videos I just started using NINA with no real questions left; it has been smooth sailing since the first time.

    • Like 4
  6. 11 minutes ago, shropshire lad said:

    Skywatcher AZ-GTi and wedge ..... running SynScan Pro to guide it.

    Now this is what I was thinking, may be wrong ?!.

    Put Polaris in centre of live view, then using the PolarisView position in above App image as an example .... adjust the mount so that the star would move left and down a touch ( approx the diam of the full moon) and the true celestial pole would now be in the centre where my imaginary red cross would be.

    [Attached image: polar alignment adjustment sketch]

    Technically yes, but in reality it would not be very accurate. You will have significant cone error (the camera not perfectly in line with the RA axis) in pretty much every setup. There is no easy way to polar align without either a polar scope or polar alignment software like SharpCap Pro. SharpCap Pro and the like need a dedicated astro cam to work, though; I don't know what camera you're working with.

    You can try to put Polaris in the position the app shows, but you will have no real way to tell how far off it is on your camera screen, or whether the view is supposed to be inverted or not (you need to experiment with this). Polar scopes work well because every polar scope has a reticle etched in the eyepiece showing the circle of Polaris's travel, which looks exactly like the view in the polar alignment app you use.

    I also found this: https://www.cloudynights.com/articles/cat/articles/darv-drift-alignment-by-robert-vice-r2760. The method looks time-consuming and annoying, but it should work if you put some time into it.

  7. My Win10 mini-PC has never been connected to the internet, and I control it with a tablet via remote desktop. This should work with a laptop too, as long as you have some way to remote desktop into the mini-PC. I am not familiar with TeamViewer or VNC, but I think you'll figure this one out; there are many ways to remote desktop into a Win10 client.

    What I use is a mini router like this: https://www.amazon.de/-/en/GL-iNet-GL-MT300N-V2-300Mbit-repeater-extender/dp/B073TSK26W/ref=sr_1_5?dchild=1&keywords=mini+router&qid=1631214262&sr=8-5

    This thing creates its own Wi-Fi network that you can connect to anywhere, regardless of internet availability on either the mini-PC or the device used to connect to it. It needs a USB connection for power and an ethernet port on the mini-PC to create the network. Range is not great with these, but on a dark-site trip I doubt you'll be taking the laptop on a hike away from the telescope; as long as you're within rock-throwing range it will work.

  8. 1 hour ago, symmetal said:

     I don't know if the blue centre is actually from the galaxy itself or the star which happens to be in line with it. Your image has captured the two darker regions around the centre well.

    From Wikipedia: This galaxy has a morphological classification of pec dE5, indicating a dwarf elliptical galaxy with a flattening of 50%. It is designated peculiar (pec) due to patches of dust and young blue stars near its center.

    Huh, I always assumed that seeing blue in M110 was just artistic licence in processing, and that something had gone wrong in colour balancing whenever I saw blue in an elliptical galaxy. You learn something new every day, I guess.

    • Like 2
  9. I'd like to think that I'm both an astronomer's astronomer and a butterfly astronomer. I do want to learn something about the target I image by looking at the image I took, so the target is always the priority, not so much how pretty it is. Of course, if it's both, that ticks all the boxes for me. Galaxies are my thing, and you can always tell something about a galaxy as long as it's not a tiny pile of pixels in a corner.

    But the real choice is missing from the list: Weather forecast enthusiast.

    • Like 2
  10. 14 minutes ago, The Lazy Astronomer said:

    Star count is basically the only criteria I use for sub rejection (or at least further inspection of the sub). Really quick and simple to analyse in DSS and sort by number of stars.

    I had a problem with DSS reporting bogus star counts. It decided that a frame which had obviously failed to my eye contained more than a hundred stars, hence the additional metrics I also use.

    In that case I was close to the meridian and my RA motor was slipping every couple of seconds because of an underperforming mount, leading to bunched-up star trails. Basically, the brightest 10 or so stars trailed intermittently, and to DSS that looked like a hundred stars in a tight line.

  11. Did you use kappa-sigma clipping? I had a case where DSS stacked a set of frames that were nothing but clouds and star trails, and the resulting image came out pretty much empty. It turned out kappa-sigma clipping had simply rejected almost all of the pixels.

    Try using the average setting in the lights tab to see if something funny is going on.
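
    As a rough illustration of the failure mode (a minimal sketch, not DSS's actual implementation): kappa-sigma clipping computes a per-pixel mean and standard deviation across the stack and rejects outliers, so if the frames all disagree, almost everything gets rejected:

    ```python
    import numpy as np

    def kappa_sigma_stack(frames, kappa=2.5, iterations=3):
        """Average frames, iteratively rejecting pixels more than
        kappa standard deviations from the per-pixel mean."""
        data = np.stack(frames).astype(np.float64)  # shape (N, H, W)
        keep = np.ones(data.shape, dtype=bool)      # True = pixel kept
        for _ in range(iterations):
            masked = np.where(keep, data, np.nan)
            mu = np.nanmean(masked, axis=0)
            sigma = np.nanstd(masked, axis=0)
            keep = np.abs(data - mu) <= kappa * sigma
        # With clouds or trails in every frame, `keep` can end up almost
        # entirely False, and the result comes out nearly empty
        return np.nanmean(np.where(keep, data, np.nan), axis=0)
    ```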

  12. The app means your polar scope, not the finder scope attached to your telescope. You need to polar align so that Polaris sits in the same position the app shows. To do this you might first need to rotate your RA axis so that the reticle's 6 points down and its 12 points up, as many polar scopes are not that well aligned.

    Basically just try to adjust the alt/az knobs until you see the exact same thing as the app shows.

  13. 16 minutes ago, The Lazy Astronomer said:

    Would be quicker to analyse them with your stacker of choice and delete the bad ones though. 

    Ditto on this one.

    Going through FITS files one by one and inspecting them manually is very time-consuming and not always even possible, for instance when your lights are just barely over the read noise level and show almost nothing but black. You can throw them into DSS and inspect them that way, but it's still far from optimal, and not worth it when you have so many subs (119, I believe?).

    But NINA makes it easy to analyze the images at a glance. What I do is put the guiding RMS error in arcseconds, the star count and the star HFR in the file name itself; you can set this up in the options. If the RMS is too high, guiding was bad and I throw the sub out. If the star count is too low but the HFR is good, it was probably clouds or something external, like a light accidentally shone near the telescope. If the HFR is bad, the focus was off. Coming from a DSLR it takes a bit of getting used to; I had to accept that I can't easily eyeball the subs themselves anymore. If you must see them, you can always view them in NINA as stretched, debayered images, but only before shutting down the session and NINA for the night.
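
    As an illustration, the filtering this enables looks something like the sketch below. The filename pattern and the thresholds are made up for the example; the real pattern depends on what you set in NINA's options:

    ```python
    import re
    from pathlib import Path

    # Hypothetical filenames like "M33_RMS0.85_STARS412_HFR2.31.fits"
    PATTERN = re.compile(r"RMS(?P<rms>[\d.]+)_STARS(?P<stars>\d+)_HFR(?P<hfr>[\d.]+)")
    MAX_RMS, MIN_STARS, MAX_HFR = 1.0, 100, 3.0  # illustrative limits only

    for sub in sorted(Path("lights").glob("*.fits")):
        m = PATTERN.search(sub.name)
        if m is None:
            continue
        rms, stars, hfr = float(m["rms"]), int(m["stars"]), float(m["hfr"])
        if rms > MAX_RMS:
            print(f'{sub.name}: bad guiding ({rms}" RMS)')
        elif hfr > MAX_HFR:
            print(f'{sub.name}: poor focus (HFR {hfr})')
        elif stars < MIN_STARS:
            print(f'{sub.name}: few stars, likely clouds or stray light')
    ```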

    • Like 2
  14. 1 hour ago, nfotis said:

    I am a newbie in DSO imaging (I know only planetary imaging - a bit), but I suppose that in Bortle 8-9 areas like Athens some kind of filtering is required?

    It seems that every manufacturer offers its own gain/mode/whatever parameters (and every program has its own idea about acceptable parameter ranges, judging by the NINA thread and native vs ASCOM drivers). This can lead to a real mix-up if we don't keep careful notes of parameters etc.

    N.F.

    It's easy to think that to combat light pollution you need a light pollution filter to block it, but hear me out on this little "thought experiment".

    What is light pollution? It is mostly street lights, house lights and other types of lighting that allow humans to see at night. Why is that lighting a specific colour, usually somewhere in the yellow-green-white range? (White LEDs also peak in the green: white is blue, green and red mixed, and the brightest part of the mix sits in the green, in the middle of the spectrum.) These wavelengths are chosen because they are where the human eye is most responsive, so the least amount of energy provides the greatest amount of usable lighting at night (for humans).

    Why did the human eye evolve this way? As it turns out, there is one dominant natural light source, the Sun, and its output peaks somewhere around the whitish-yellow part of the spectrum, which is also where light pollution filters block most of the light. Largely by coincidence, most galaxies are brightest at roughly the colour of sunlight. This varies a bit: actively star-forming galaxies (like Triangulum!) are noticeably bluer than the Sun, while galaxies with essentially no active star formation, like the ellipticals (M87, M32, M110), are mostly blobs of stars redder than the Sun. This is because blue stars live only a blink of an eye in cosmic terms, so if you see a blue star it must have been born recently, as it will also die very soon. Older galaxies with no active star formation contain only the older, smaller stars that keep happily fusing hydrogen for billions or trillions of years. Anyway, the point is that the average colour of galaxies is pretty close to sunlight (a coincidence).

    So light pollution filters are essentially "galaxy light" filters too, since light pollution and galaxy light are very similar.

    The reason this is (mostly) not a problem is that light pollution is not a sheet of cloth over your telescope that physically blocks light; it is an added colour. And colour is easy to balance out in processing, especially with a camera like the IMX571, which has an almost unbelievable colour response at very short exposure times. I should add that it is NOT possible to properly colour balance a shot taken through a light pollution filter, as a big portion of the spectrum is missing.
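
    The crude core of that balancing step looks something like this sketch (my simplification; real gradient-removal tools fit a 2-D background model rather than one constant per channel):

    ```python
    import numpy as np

    def neutralize_background(rgb):
        """Subtract a per-channel sky estimate from a linear RGB stack.

        The median is dominated by background pixels in a typical
        deep-sky frame, so subtracting it removes the uniform colour
        cast that light pollution adds.
        """
        sky = np.median(rgb, axis=(0, 1))   # one estimate per channel
        pedestal = sky.min()                # retain a common background level
        return np.clip(rgb - sky + pedestal, 0.0, None)
    ```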

    Below is a measured spectrum of M33, a very blue galaxy and one of the galaxies least affected by light pollution filters. The bands that light pollution filters block fall mostly between H-beta and H-alpha, which covers most of the spectrum (but not the peak, because M33 is bluer than average).

    [Figure: the optical spectrum of the M33 nucleus]

    Edit: I might add that local light pollution can be severe enough to exceed the camera's ability to produce proper colours. In that case there is really no right answer: either take the hit of the LP filter or travel to better skies. But I have never imaged from better than Bortle 6 skies, and most of my imaging last winter was from Bortle 7-8 with an 11-year-old DSLR that is nowhere near as good as the IMX571 chip, and it was still possible to get proper colour balance without filters.

    Edit 2: Of course, if you intend to image non-broadband targets like emission nebulae, you will greatly benefit from narrowband filters; with an OSC camera that would ideally be a duo-band filter like the Optolong L-eXtreme. Nebulae emit mostly at a few very specific wavelengths that these filters isolate easily from the rest of the spectrum, and with this method you can gather good data from right under a streetlight if you want to.

  15. On 01/09/2021 at 09:28, tomato said:

    I would do everything in APP. Calibrate and stack each panel first, then combine the stacked panels to create the mosaic. You need to experiment with the Multi Band Blending and Local Normalisation Correction settings to achieve the best seamless result.

    Take a look at this excellent guide by Sara Wager:

     https://www.astropixelprocessor.com/how-to-create-a-mosaic-in-easy-steps-by-sara-wager/

    I don’t know the spec of your processing PC, but the processes can take a while  on a lower spec machine.

    Thanks for the link, it was mostly helpful. But the example in the tutorial has 29 subs and doesn't do star analysis or registration, which are the phases that took my 250 subs around 4 hours. I just found it odd that APP grinds on the process for so long; I had to leave the PC running as I went to sleep because it had already taken at least 6 hours. I wonder if I did something wrong? I mostly just followed the recommendations in APP. My PC is no slouch, an overclocked 6700K with decent DDR4 RAM, if that means anything to you, so it's strange that it took so long.

    Anyway, as a proof of concept I combined 5 panels' worth of data into one and got it working fairly well. There are seams, but they are far less obvious than I expected. The middle parts are also missing the blue halo of young, hot stars, as they have the least data. I think I have the hang of it now; I just need to set aside a full processing day whenever I return to a mosaic project.

    [Attached image: M31 5-panel mosaic, proof of concept]

    • Like 1
  16. On 02/09/2021 at 23:04, nfotis said:

    It seems that the sensor is quite good. This camera looks like a plain vanilla version compared to others (no tilt plate etc).

    If you are getting these results without LP or NB filters, I wonder how well this camera will work using filters.

    N.F.

    The sensor behaves as expected of the IMX571, so: very good.

    I don't think using light pollution filters on galaxies is a good idea, as galaxies are bright in exactly the parts of the spectrum those filters block. Quite honestly I have had no trouble with light pollution; it's almost as if it's not there. Just colour balance and it's gone; the 16-bit ADC and high colour sensitivity retain the data through some pretty nasty LP.

  17. You have nice detail in there, all the spiral arms are present.

    It looks like you may have clipped the whites in processing, as the core is very bright; colour balance might also have taken a hit in the process.

    Light pollution filters are generally not very helpful with galaxies, as galaxies are brightest around the same wavelengths as light pollution; the filters also make colour balance more difficult because of the missing colours.

    I found processing only in Photoshop quite difficult at first; actually, I still do if I use only PS. It's hard to see whether you've clipped the data, and you generally can't see what you're working with until you stretch the stack. If you want to try something else I can recommend SIRIL, a free astrophotography processing program. You can stack in DSS, colour balance and stretch in SIRIL, and then do the final touches in Photoshop. SIRIL is easy to use (for astro processing software) and makes clipping whites or blacks entirely optional. It also has a photometric colour calibration tool, which pulls the correct colour from the stars recognised in the picture itself rather than something you have to balance by hand. That might not work with light pollution filters, though, as they cut off significant portions of the spectrum.
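
    For the curious, a stretch that cannot clip looks something like this arcsinh sketch (my example of the general idea, not SIRIL's exact implementation), assuming linear data normalised to [0, 1]:

    ```python
    import numpy as np

    def asinh_stretch(img, strength=100.0):
        """Arcsinh stretch of linear data in [0, 1].

        Brightens faint signal strongly while smoothly compressing,
        never clipping, bright cores. Larger strength stretches harder.
        """
        return np.arcsinh(strength * img) / np.arcsinh(strength)
    ```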

    • Like 1
  18. 38 minutes ago, Skyline said:

    How long have you been using that RisingCam IMX571 and what do you think about it compared to say the likes of zwo?

    I'm just dipping my toes into the dedicated astro cam world, this being my first one, so I can't compare it first-hand to anything other than a DSLR, which I believe is so far from a fair comparison that it doesn't even make sense.

    The camera performs extremely well and is a joy to work with; no obvious hiccups with N.I.N.A come to mind. There is no amp glow or pattern noise of any kind, and the cooler works well and fairly accurately. It overshoots its target quite a lot at first but stabilises within a few minutes and returns to the set value; it's settled by the time I've finished polar alignment with SharpCap Pro and the initial faff of setting everything up, so there is no real downtime in use. Looking at pictures taken with this and a ZWO ASI2600MC, it would be impossible to tell the difference, as they share the same chip.

    From a mechanical standpoint it is a bit different from the ZWO and QHY offerings, but then again it is €800-900 cheaper. You'll need to buy adapters, as the camera comes with just a few nosepieces, a UV/IR filter, and a tilt plate in case your model has sensor tilt. Even with those it's still in a category of its own on pricing. A glowing recommendation from me!

    • Like 2
  19. 5 minutes ago, Astroscot2 said:

    A lovely natural looking image,  can I ask what field flattener you used with the newtonian

    Thanks, that's exactly what I want from my galaxy shots!

    It's a TS-Optics 0.95x Maxfield coma corrector. It's not so apparent at this binned pixel size of 7.52 microns, but it leaves a bit of coma at the edges. Not a pixel peeper's choice, for sure.

    • Like 1
  20. 1hr43min of 30s subs

    [Attached image: M33, 1 h 43 min integration]

    Taken with an OOUK VX8 and a RisingCam IMX571 mounted on a Skywatcher EQM-35 PRO, from Bortle 6-7 skies on the night of 1-2 September with a partial Moon in the sky. The Moon didn't end up bothering me all that much, apart from an annoying extra gradient to get rid of. Guiding was mostly on the wrong side of 1 arcsecond RMS, hence the 50% resize and slight crop.

    Processed in DeepSkyStacker - SIRIL - Photoshop.

    I find it interesting that an OSC camera picks up quite a strong H-alpha signal from the brightest clusters with such short exposures. Also, seeing individual stars in another galaxy just seems so strange to me somehow; I always expect a galaxy to be a uniform mess from so far away.

    • Like 28
  21. On 29/08/2021 at 00:07, tomato said:

    It’s very tempting to try and fit extended objects like M31 in the minimum of panels but the downside is you sometimes have to orientate the object to some odd angles which can detract from the finished image.

    Here are two examples, using the same telescope and camera, a 6 panel mosaic which gave a ‘widescreen’ M31, and a 12 panel version which put the galaxy in the more traditional diagonal orientation.

    The big advantage was the 6 panel version was captured as 6x1 hr panels in a single clear night in October, the 12 panel version took close on 3 months to complete.

    [Attached images: 6-panel and 12-panel M31 mosaics]

    Astro Pixel Processor will do a decent job of combining the panels, especially if they are taken under similar conditions, this was used on the 6 panel version. However, removing joins and gradients taken over many sessions can be a real challenge, by far the best results I have obtained have been with the Photometric Mosaic script available in Pixinsight. 

    Best of luck with your M31 project, I will be attempting another mosaic on this iconic target shortly (or when the weather actually permits).

    I have just tried combining the stacks from DSS to build the mosaic in APP, and there are obvious lines between the stacks. Should I do the whole process in APP? I tried, but after an hour of registration it was nowhere near done, so I just cancelled it. Is this normal for APP? For comparison, I don't think it took more than 20 minutes from unloading my memory card to having all 4 panels stacked in DSS.

  22. On 30/08/2021 at 19:53, ONIKKINEN said:

    Edit: The mess in the top left corner is caused by my focuser sagging under the newly increased weight of the imaging train compared to a DSLR. Well maybe not so much the weight but the "lever" effect from having the weight be further away from the focuser than a DSLR that sits right on it.

    Actually, upon further investigation it looks like there is also a fair bit of sensor tilt. I took apart my focuser, tightened things down a bit and rotated it 90 degrees so that its stronger axis is vertical in typical operation (pointing zenith-ish), which helped somewhat, but not entirely.

    I will have to look for a tilt plate to add somewhere in the imaging train to fix it; in the meantime I will have to crop a good 1/3 or so of the frame to hide the uneven field.