
Herschel Prism


Kenza


Hi! I would like to use my scope to observe and photograph the Sun. As I understand it, there are at least three ways to do it: using a solar filter in front, a Herschel prism at the back, or buying a scope specialized for solar viewing. I am interested in the Herschel prism and have found one: http://www.teleskop-express.de/shop/product_info.php/info/p3747_Baader-2--Cool-Ceramic-Safety-Herschel-Prism---photographic.html

Would this do the job or do I need something extra?


That would do the job OK, a little pricey for my taste though.

I went for the "Scopium" wedge here .... http://www.365astronomy.com/scopium-white-light-herschel-wedge-solar-wedge-with-integrated-nd30-filter-red-radiator-p-3535.html

Works a treat for the money ...  :smiley:

I also use the Baader Solar film and this might be the better way to start off , much cheaper than any wedge and gives great results.

( my offerings from today with both ... http://stargazerslounge.com/topic/204340-sun-in-white-light-7-1-2014-1215-gmt/ )


I use the Lunt Herschel wedge (1.25", not 2"). Excellent piece of kit and much cheaper than the (also excellent) Baader. A 2" wedge is not needed for seeing or imaging the entire disk unless the scope has a HUGE focal length. Note that these wedges ONLY work with refractors, NOT reflectors or catadioptric scopes.


Is this for an Equinox 120 (that will be a very nice combo!)? Which camera will you be using?

I had to buy some extra adapters to be able to get prime focus with my Equinox 120, Baader Herschel Wedge and my webcam (the light path was otherwise too long unless I used a Barlow to push out the focus):

Baader item no.: 1508035 (#27) (2-inch male to T2 male)

T2-Cs from Teleskop Service


Obviously the black Scopium wedge with the red filter is less popular than the others :)

Suddenly I'm having thoughts about putting one on the back of my ST120 with a mono camera and perhaps a red filter to keep CA under control.  Not long until my birthday :D

James


Is this for an Equinox 120 (that will be a very nice combo!)? Which camera will you be using?

I had to buy some extra adapters to be able to get prime focus with my Equinox 120, Baader Herschel Wedge and my webcam (the light path was otherwise too long unless I used a Barlow to push out the focus):

Baader item no.: 1508035 (#27) (2-inch male to T2 male)

T2-Cs from Teleskop Service

Yeah, it is for an Equinox 120. I have a Canon 550D and an Atik 383 or 4000 is coming in 2-3 months.


With this Baader Herschel Wedge I get neutral density filters and a contrast filter. But what about an H-alpha filter? Can I use for example an Astronomik H-alpha (for deep sky imaging) or do I need a different one for solar imaging?


The Herschel wedge will only do white light; for H-alpha you need a specialist solar scope with a much narrower bandpass than an H-alpha visual filter. A PST or similar is a good starter.

Stu


Ah, I am not sure if you will be able to get prime focus or not with a DSLR and the Baader Herschel Wedge without adapters. Maybe TeleskopService will know/can advise which adapters if any are needed?

Yeah, it is for an Equinox 120. I have a Canon 550D and an Atik 383 or 4000 is coming in 2-3 months.


For the equinox 120 a 1.25" Herschel wedge is more than big enough. Even my C8 with 2000mm focal length allows full-disc images of the sun with a 1.25" diagonal (NOT wedge: I use a Thousand Oaks glass filter in front of the C8). APM sell sets consisting of a Herschel wedge (Lunt) with ND3 filter. Add a 1.25" solar continuum and polarising filter and you are still a great deal cheaper than the Baader.


Of the white light solutions a good Herschel Wedge pulls out far more surface granulation than a solar film over the objective. The detail looks sharper too. It's a better view. I've tried Lunt and Baader. Both good.

I'm not at all sure your CCD cameras would work with it, though. There is a minimum exposure time on the 383 imposed by the wipe of the shutter. I suspect this would be far too long. I'm not even sure that the 4000 at 1/1000th of a second would be short enough. I'm afraid I just can't remember, but I seem to think we tried my 4000 in a guest's Herschel wedge (I don't have one of my own) and even with a moon filter we couldn't slow it down enough. Maybe if you put a green filter in the line you might get away with it. Maybe someone here will know.

Olly


I have the Lunt wedge and Baader Solar Continuum filter; it seems to work very well.  I currently use a DMK41, but might try my Canon in the next few days.  I use an ST-80 (when traveling) and a GT102 at home.  Your 120 should work very nicely with your DSLR and I can recommend the Lunt wedge if you can get one at a good price.

Hopeful of a capture with the GT102 in the next hour and will post in this forum.

Robin


I have the Lunt wedge and Baader Solar Continuum filter; it seems to work very well.  I currently use a DMK41, but might try my Canon in the next few days.  I use an ST-80 (when traveling) and a GT102 at home.  Your 120 should work very nicely with your DSLR and I can recommend the Lunt wedge if you can get one at a good price.

I find myself wondering...

With a colour camera, the intensity of light hitting an R, G or B pixel from a white-light solar image should be broadly similar across all three colour components, otherwise we'd presumably get colour in the image.  The RAW image might therefore be regarded not as a mosaicked/Bayer colour image, but as an accurate mono image.  Debayering the RAW into a standard colour frame could perhaps be viewed as adding no colour information, but actually reducing the resolution of the captured data by "blurring" adjacent pixels into each other when there is no need to do so.  It might be interesting to see the difference between images created via the normal debayering process and by taking the RAW file and just treating it directly as a mono image (assuming there's some way to do that).

James
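The idea can be sketched with a toy example. This is a minimal sketch, assuming an RGGB mosaic, the classic bilinear demosaic kernels, and perfectly equal channel sensitivities (which real sensors only approximate), not any particular camera's pipeline:

```python
import numpy as np
from scipy.signal import convolve2d

def bayer_masks(shape):
    """Boolean masks for an assumed RGGB colour filter array."""
    H, W = shape
    y, x = np.mgrid[:H, :W]
    r = (y % 2 == 0) & (x % 2 == 0)
    g = (y % 2) != (x % 2)
    b = (y % 2 == 1) & (x % 2 == 1)
    return r, g, b

def bilinear_demosaic(raw):
    """Classic bilinear interpolation: fill each colour plane from its
    CFA samples by convolving with the standard averaging kernels."""
    r_m, g_m, b_m = bayer_masks(raw.shape)
    k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0
    planes = []
    for mask, k in ((r_m, k_rb), (g_m, k_g), (b_m, k_rb)):
        plane = np.where(mask, raw, 0.0)
        planes.append(convolve2d(plane, k, mode="same"))
    return np.dstack(planes)

# A colourless (white-light) scene: dark field with a bright 2-pixel-wide line.
scene = np.zeros((8, 8))
scene[:, 3:5] = 1.0
raw = scene.copy()            # R = G = B, so the mosaic equals the mono scene
rgb = bilinear_demosaic(raw)
mono_via_rgb = rgb.mean(axis=2)

print(np.round(scene[4], 2))         # the "true" profile across the line
print(np.round(mono_via_rgb[4], 2))  # softened by the demosaic round trip
```

The RAW data interpreted directly as mono reproduces the scene exactly, while the demosaic-then-desaturate route smears intensity into the neighbouring columns, which is the resolution loss being described.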


I have a 1.25 inch Lunt and 2 inch Baader Herschel wedge, used with my Onyx ED80 and Equinox 120. For me:

The Lunt 1.25 is better value, giving almost as good a view as the Baader for much less money. I love using the Lunt with my grab-and-go setup. It's light and compact, fitting easily in my ED80's case and making for easy balancing on my mini giro mount. I can get prime focus imaging in both scopes without using adapters to shorten the light path. The built-in ND filter means no messing about swapping ND filters.

The Baader gives slightly better views than the Lunt. I can change the ND filter to keep the shutter speed fast when imaging, especially when using a Barlow. The Baader, being 2 inch, provides a very solid connection to the scope, so I like using it with my binoviewer. I have not used the binoviewer with the Lunt 1.25, as the wedge has slipped a few times with eyepieces/the webcam so I prefer to use the 2 inch. To get prime focus imaging with the Baader with either refractor, I have to use adapters to shorten the light path (I don't need to shorten the light path when using my 2.5x Powermate).

If you buy various bits of Baader kit, you can combine them using the adapters. e.g. I  use the same adapters used to shorten the light path for my webcam to shorten the light path for my Maxbright binoviewer used with the wedge.

It's a bit of a pain swapping over the ND filter with the Baader. After imaging I have to change the filter for visual, so I cover the scope safely (never forget!!), take the wedge off, unscrew the adapters I use to get prime focus with the webcam, change the ND filter, and put the original Clicklock element back on. Also, to use a Barlow, I have to again take off the short lightpath adapters, put the Clicklock back on, and in addition unscrew the webcam and put the webcam's original nosepiece back on, then put the webcam back in. It's not that much of a problem, though the more things you take on and off, the more opportunity there is for dust to get in. Dust tends to show up much more when using a Barlow. Sometimes I am lazy and use the Lunt instead for a quick look and keep the Baader set up for imaging with the ED120.

If I had to choose for my Equinox 120, I would go for the Baader. I feel the slightly better view will pay off, especially on close-ups under good conditions. If I had to choose for my ED80, I'd probably pick the Lunt (or similar) 1.25 for its convenience.

I have no idea whether a 1.25 inch wedge will cause vignetting with a DSLR, as my webcam has a much smaller sensor - hopefully the DSLR folks can clear that up?

Edit: I forgot to add that the solar finder on the back of the Baader is handy. With the Lunt, it's a bit more of a pain to find the Sun but not a major problem as you can use the shadow cast by the scope to try and line it up.
 


James, I am not sure there will be any advantage in fiddling with the DSLR output; the light falling on the sensor is white light and provided you don't over-expose there should be no more colour bleed than when the camera is used normally.  After all, most DSLRs are optimised for taking pictures in white light, as generated by our sun.

There is, however, no real advantage in using a colour camera for solar; the image is greyscale anyway. But if that is all you have got, there is no real problem in using a DSLR.

Luke, as for a 1.25" wedge with a DSLR, I would like to know that too and will give it a try in the next week or so.

Robin


James, I am not sure there will be any advantage in fiddling with the DSLR output, the light falling on the sensor is white light and provided you don't over-expose there should be no more colour bleed than when the camera is used normally.

I meant that the "bleed" will happen when converting from RAW, doing the transform that combines the colours, rather than as a result of the image capture process.  There are lots of ways of doing the transform from RAW to RGB, but they all involve combining colours from multiple pixels in the RAW image to provide the full RGB triplet for each.  Because the image is effectively monochrome to start with (i.e. R, G and B are all the same intensity), going RAW -> colour TIFF -> mono will actually result in different pixel intensity values compared with directly interpreting the RAW image as mono.  It may not make much difference where the intensity varies fairly smoothly across areas of pixels, but it may well result in blurring of boundaries where intensity transitions are more marked.

It's hard to explain better without drawing pictures to illustrate how the data might be combined.  I'll try to do that later.

James


Right, here we go.  Please offer the opinion that you think I'm talking cobblers if I've missed something obvious :D

Let's say we're imaging the Sun in white light with a DSLR and we have an effectively monochrome image.  That is, the R, G and B components of the image would be the same.  There might be a feature that appears on the sensor like this:

mono-raw.png

But the camera has a colour filter, so what actually appears in the RAW file is more akin to this:

mosaic-raw.png

There are lots of algorithms for turning this back into a full RGB image.  A commonly-used one is "bilinear".  It's relatively quick, but not as good as others.  On the positive side it's easy to understand, so I've used it for this example of what the colour image might look like after converting to RGB:

demosaic.png

(I did that in my head, so there may be errors :)

Converting the colour image to monochrome gives a final result like this:

mono-demosaick.png

which is similar to, but not the same as, what we could have obtained by taking the RAW data and interpreting it directly as mono (that is, the first image).

I admit the example I've chosen is extreme, partly because it makes a clearer example and partly because my head won't explode whilst trying to do the maths :)

It looks like it is possible to use dcraw to turn a RAW frame directly into mono, so I might well have a play with that perhaps some time when we have a rainy day and I can't get out imaging...

James
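For what it's worth, dcraw does appear to have options for exactly this. A sketch, with a placeholder filename (check `dcraw` with no arguments for the option summary on your version):

```shell
# -d  document mode: one grey value per photosite, no colour interpolation
# -4  linear 16-bit output
# -T  write a TIFF instead of a PPM
dcraw -d -4 -T IMG_0001.CR2
```

Swapping `-d` for `-D` skips the brightness scaling as well, giving the completely unprocessed sensor values.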


In theory yes, but most cameras do one more thing: they combine four pixels (RGGB) to produce one colour pixel, but then don't step on to a new set of four pixels, only step on one pixel, so two of the pixels are used again with two new ones.  In this way they produce a colour picture with the same resolution as the chip.  If they didn't do this they would produce an image 1/4 of the size.

It is a fiddle really; you have lost resolution and detail as the same pixel is used more than once, but it looks better than if they didn't do it.  That is why you don't get back to a nice block of white in your example: the true resolution is only 1/4 of the pixel resolution, so if your line is only 2 pixels wide (as in your example) and you need a 2x2 matrix, it is never going to go from black to white and back to black; you are trying to exceed the resolution of the imaging device.  A wider block of colour would show white in the middle pixels and grey or some false colour towards the edges.

If you start off with white light (equal amounts of RGB) then once they are combined you end up with white light.  By the way, there are usually two green pixels since the CCD/sensor is not as sensitive in the green.  There are always slight differences in sensitivity, and this is where the white balance comes in.  If it is auto, it scans the picture, looks at the RGB channels individually and balances them, as it assumes a fairly even distribution.  If your picture is just blue sky with very little else, then auto-balance will probably get it wrong.

Robin
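The auto-balance described above is essentially the "grey world" assumption: scale each channel so its mean matches the overall mean. A minimal sketch with made-up numbers, not any camera's actual algorithm:

```python
import numpy as np

def grey_world(rgb):
    """Grey-world auto white balance: scale each channel so its mean
    matches the overall mean, assuming the scene averages out to grey."""
    means = rgb.reshape(-1, 3).mean(axis=0)
    gains = means.mean() / means
    return rgb * gains

# Toy image with a deliberate blue cast (illustrative channel weights).
rng = np.random.default_rng(0)
img = rng.random((4, 4, 3)) * np.array([0.8, 1.0, 1.2])
balanced = grey_world(img)
print(balanced.reshape(-1, 3).mean(axis=0))  # channel means now equal
```

An image that is mostly blue sky violates the "averages to grey" assumption, so the correction overshoots, which is exactly the failure mode mentioned above.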


I meant that the "bleed" will happen when converting from RAW, doing the transform that combines the colours, rather than as a result of the image capture process.  There are lots of ways of doing the transform from RAW to RGB, but they all involve combining colours from multiple pixels in the RAW image to provide the full RGB triplet for each.  Because the image is effectively monochrome to start with (i.e. R, G and B are all the same intensity), going RAW -> colour TIFF -> mono will actually result in different pixel intensity values compared with directly interpreting the RAW image as mono.  It may not make much difference where the intensity varies fairly smoothly across areas of pixels, but it may well result in blurring of boundaries where intensity transitions are more marked.

It's hard to explain better without drawing pictures to illustrate how the data might be combined.  I'll try to do that later.

James

Easy enough to try, James: just uncheck the 'Debayer Raw Image Files' option in PIPP.  I think you would be very lucky to get away without a Bayer pattern showing though, unless you were very careful with your white balance!

Chris


James, won't you get the Bayer matrix boundaries showing if you follow your method? When you take a flat field in an OSC CCD camera (which you don't debayer while stacking) you end up with a greyscale flat with the grid visible. This then has to be debayered - blurred out - to make the grid lines disappear.

Olly
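The flat-field effect is easy to demonstrate: even under perfectly even illumination, unequal filter transmissions leave a checkerboard in the un-debayered data. A small sketch with invented channel sensitivities (not from any real sensor), assuming an RGGB layout:

```python
import numpy as np

# Uniform white illumination, but the R, G and B filters pass different
# fractions of it (illustrative values only).
H, W = 6, 6
y, x = np.mgrid[:H, :W]
gain = np.empty((H, W))
gain[(y % 2 == 0) & (x % 2 == 0)] = 0.8   # R photosites
gain[(y % 2) != (x % 2)] = 1.0            # G photosites
gain[(y % 2 == 1) & (x % 2 == 1)] = 0.7   # B photosites

flat = 1.0 * gain   # evenly lit "flat field" as recorded in the RAW file
print(flat)         # the Bayer grid is plainly visible in the mono data
```

So the direct-mono trick only works cleanly if the three channel responses really are equal after white balance; otherwise the grid has to be interpolated away, as with OSC flats.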


In theory yes, but most cameras do one more thing: they combine four pixels (RGGB) to produce one colour pixel, but then don't step on to a new set of four pixels, only step on one pixel, so two of the pixels are used again with two new ones.  In this way they produce a colour picture with the same resolution as the chip.  If they didn't do this they would produce an image 1/4 of the size.

Not exactly. They combine some number of pixels with "the one under consideration". Depending on the algorithm, different numbers of pixels will be used. The bilinear algorithm, for instance, considers the colours of the eight pixels surrounding the one under consideration. That is exactly what I have done in the above example. More complex algorithms process even more, especially to provide better edge detection, which is where many of the simpler processes fall down. I can't do those in my head though :)

That we talk about CFAs in terms of blocks of four pixels has nothing to do with the algorithms used to create an RGB image. It's just a handy label for the arrangement of the colour mask. I've looked up quite a few because I want to implement them, and none that I've yet found use the block of four pixels to set the colour for each one. The "nearest neighbour" algorithm is closest, but it produces fairly unpleasant results.

It is a fiddle really; you have lost resolution and detail as the same pixel is used more than once, but it looks better than if they didn't do it.  That is why you don't get back to a nice block of white in your example: the true resolution is only 1/4 of the pixel resolution, so if your line is only 2 pixels wide (as in your example) and you need a 2x2 matrix, it is never going to go from black to white and back to black; you are trying to exceed the resolution of the imaging device.  A wider block of colour would show white in the middle pixels and grey or some false colour towards the edges.

It is a fiddle, or at best an approximation, yes. We're trading resolution in return for a colour image with as many pixels as the sensor.

What I'm suggesting here is that if we have no need of the colour data then we don't need to give up the resolution. And if we can assume that the R, G and B components are the same then we have no need of the colour data.

James

