Category Archives: Optical modeling

How to develop flexible optical constant models based on given n and k tables

Materials used in thin films may differ in their optical constants n and k from the corresponding bulk versions. Even if you find optical constant tables in the literature for your application, you may need to make some adjustments to the n and k values. This is difficult if you have just a table of fixed values – what you need is a flexible model which can adjust the spectral shape of n and k by modifying a few key parameters only. This short tutorial shows how to proceed in such a case.

Crystalline silicon is used as example. The database of our SCOUT and CODE software packages contains several silicon versions. Here I have selected the item ‘Silicon (crystalline, MIR-VUV)’. These are the n and k data, featuring several interband transitions and a slow decay of k towards the indirect band gap around 1.1 eV:

If you change the temperature of a silicon wafer or deposit poly-crystalline silicon thin films you may observe similar structures, but not exactly the same n and k – a flexible model would be nice to have. Our strategy to develop such a model will be this: First, we generate ‘measured data’ which are computed using the existing n and k table values. Then, in the second step, we set up a suitable model and adjust its parameters to reproduce the ‘measured’ data. If the ‘measurements’ provide enough information, the n and k values of the model should be almost the same as those of the original table.

Since we will simulate the measurements, we can easily work with a system which would be almost impossible to realize. In order to clearly ‘see’ the weak silicon absorption in the visible we use a 2 micron thick free-standing layer – this layer is quite transparent and it will generate a nice interference pattern. Matching the amplitude of the fringes will be difficult with wrong k values, so this setup ensures good n and k data of the model in the visible and NIR. The layer stack is this:
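As a side note, the effect of k on the fringe contrast can be checked with a few lines outside SCOUT/CODE. The following minimal Python sketch uses the coherent Airy formulas for a free-standing slab in air at normal incidence; the values n = 3.6 and d = 2000 nm are just illustrative numbers, not part of the tutorial configuration.

    import numpy as np

    def slab_RT(n, k, d_nm, wl_nm):
        # Coherent R and T of a free-standing slab in air at normal incidence
        # (Airy summation), using the N = n + ik convention.
        N = n + 1j * k
        r01 = (1 - N) / (1 + N)              # air -> film amplitude reflection
        r10 = -r01                           # film -> air
        t01 = 2 / (1 + N)
        t10 = 2 * N / (1 + N)
        beta = 2 * np.pi * N * d_nm / wl_nm  # phase thickness
        denom = 1 + r01 * r10 * np.exp(2j * beta)
        r = (r01 + r10 * np.exp(2j * beta)) / denom
        t = t01 * t10 * np.exp(1j * beta) / denom
        return np.abs(r) ** 2, np.abs(t) ** 2

    # fringe contrast in T collapses as k grows
    wl = np.linspace(600.0, 1000.0, 400)
    for k_val in (0.0, 0.01, 0.05):
        R, T = slab_RT(3.6, k_val, 2000.0, wl)
        print("k =", k_val, " T between", round(T.min(), 3), "and", round(T.max(), 3))

With k = 0 the fringes swing over their full range; already a small k damps both the average transmittance and the fringe amplitude, which is exactly the information the fit will use to pin down small k values.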

Using this layer stack we could now generate several reflectance and transmittance spectra, taken at various angles of incidence. This would make a nice set of ‘measured data’ for the fit procedure. However, to make our life as easy as possible we can use an ellipsometry object in the list of spectra. Objects of this type have the option to compute so-called ‘Pseudo n and k’ values. With this option the ellipsometric angles Psi and Delta are represented by pseudo n and k values of a virtual single interface. In spectral regions where our silicon layer is opaque the pseudo n and k values are the same as the real ones. In the transparent region the pseudo n and k values reflect the interference structures – this looks strange, and one could certainly question the usefulness of the concept ‘Pseudo n and k’. However, the interference pattern is exactly what we are after to obtain good small k values in the model. The blue curves below show the generated pseudo n and k values. Within this example we will work in the spectral range 200 … 1000 nm:
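If you want to see what ‘Pseudo n and k’ means in terms of numbers, the textbook relation between Psi, Delta and the pseudo-dielectric function of a virtual single interface is sketched below. The function name, the example readings and the branch handling are my own choices for illustration and do not reproduce the internal SCOUT code.

    import numpy as np

    def pseudo_nk(psi_deg, delta_deg, aoi_deg):
        # pseudo optical constants of a virtual single interface,
        # computed from the ellipsometric angles Psi and Delta
        psi, delta, phi = np.radians([psi_deg, delta_deg, aoi_deg])
        rho = np.tan(psi) * np.exp(1j * delta)                 # rho = rp / rs
        eps = np.sin(phi) ** 2 * (1 + np.tan(phi) ** 2 *
                                  ((1 - rho) / (1 + rho)) ** 2)
        N = np.sqrt(eps)
        if N.imag < 0:
            N = N.conjugate()   # Delta sign conventions differ; enforce k >= 0
        return N.real, N.imag

    # hypothetical reading: Psi = 10.6 deg, Delta = 173 deg at 70 deg incidence
    print(pseudo_nk(10.6, 173.0, 70.0))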

Ellipsometry objects offer the local menu command ‘Actions / Use simulated data as measured data’ which is what we need now. The blue curves are now copied to the red measurement data as if we had imported measured data.

We can now replace the material object in the layer stack by our first model. I have started with a very simple model which consists of a constant real part of the refractive index and a Tauc-Lorentz model (inside a KKR susceptibility object). It is a good idea to generate a new item of type ‘Multiple spectra view’ which can show both the original table values of n and k and the model values in one graph (lower left side). With this simple model you cannot expect a good match since silicon shows several interband transitions in the range 200 … 1000 nm:
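The Tauc-Lorentz oscillator used in such a KKR susceptibility follows the Jellison-Modine expression for the imaginary part, with the real part obtained by a Kramers-Kronig transform. The sketch below is a simplified stand-in for this construction: the constant real part of the refractive index is approximated by a constant dielectric background, the principal-value integral is handled very crudely, and all parameter values are arbitrary starting guesses rather than fit results.

    import numpy as np

    def tl_eps2(E, A, E0, C, Eg):
        # imaginary part of a Tauc-Lorentz oscillator (Jellison-Modine form)
        E = np.asarray(E, dtype=float)
        e2 = np.zeros_like(E)
        m = E > Eg
        e2[m] = (A * E0 * C * (E[m] - Eg) ** 2 /
                 (((E[m] ** 2 - E0 ** 2) ** 2 + C ** 2 * E[m] ** 2) * E[m]))
        return e2

    def kk_eps1(E_eval, xi, e2_xi, eps_offset):
        # crude Kramers-Kronig transform; the principal value is approximated
        # by leaving out grid points next to the pole
        dxi = xi[1] - xi[0]
        e1 = np.empty_like(E_eval)
        for i, E in enumerate(E_eval):
            g = xi * e2_xi / (xi ** 2 - E ** 2)
            g[np.abs(xi - E) < dxi] = 0.0
            e1[i] = eps_offset + (2.0 / np.pi) * np.sum(g) * dxi
        return e1

    # hypothetical starting values (eV units), not fitted numbers
    A, E0, C, Eg = 100.0, 3.6, 0.8, 1.2
    n_const = 1.5                              # constant real-index background
    xi = np.linspace(0.01, 12.0, 6000)         # integration grid
    E = np.linspace(1.24, 6.2, 300)            # 1000 ... 200 nm in eV

    e2 = tl_eps2(E, A, E0, C, Eg)
    e1 = kk_eps1(E, xi, tl_eps2(xi, A, E0, C, Eg), n_const ** 2)
    N = np.sqrt(e1 + 1j * e2)                  # model n + i k to compare with the table
    print(N[:3])

Adding more oscillators, as done below, simply means summing several such contributions to the imaginary part before the Kramers-Kronig step.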

However, if we limit the range to 400 … 1000 nm the simple model works quite well:

For the full range (200 … 1000 nm) we need more interband transitions. Sticking to the Tauc-Lorentz oscillator type, I have added two more, which brings the model to a new level (still not satisfying, though):

Depending on your patience (and the needs of the final application) you can add more features as you like, approaching the original table values of n and k more and more closely. I stopped my example at this level:

The silicon model developed this way can now be saved to the optical constant database for future use. You can use it for studies of systematic variations of deposition conditions and generate tables like ‘resonance frequency vs deposition temperature’.

You can follow the strategy shown in this tutorial for all kinds of materials, as long as you get good tabulated values of n and k to start with.

Polarizer for R and T measurements

We have added a polarizer to our desktop R and T measurement system. The same polarizer can be used for all systems recording R and T for shiny, non-scattering samples.

The desktop system records the 100% signal when detector and light source are positioned directly opposite to each other:

After this measurement you can do absolute reflectance measurements at arbitrary angles of incidence. We have generated tools to conveniently set fixed angles of incidence – here we have used 45° and 60°.

The graph below shows 4 absolute spectra of a silicon wafer (45°, 60°, s- and p-polarization). We could achieve an almost perfect match of simulated spectra based on literature data for the optical constants, fitting a thickness of 6.2 nm for the native SiO2 layer on the wafer. Data acquisition for each spectrum took less than 1 second (averaging 18 spectra using an integration time of 50 ms).

Angle of incidence distribution

Spectroscopic experiments can never realize a single angle of incidence but have to work with a (continuous) distribution of angles. Although in most cases the assumption of a single angle of incidence is a very good approximation, there are cases where we need to take more details into account when producing simulated spectra. One example is taking reflectance spectra of small spots with a microscope objective, using a large cone of incident radiation.

For a long time our software packages have been able to compute spectra averaged over a set of incidence angles, each one defined by the value of the angle and a weight. We have now implemented new features to simplify work in this field.

If you have prepared a list of angles of incidence you can now connect these angles to the angle of incidence that you have defined for the spectrum object which owns the list of angles. You can check the option as shown here:

If you have activated this option you can use a set of angles with positive and negative values (centered around 0) as shown here:

If the angle of the parent object is 50°, the computation of the spectrum is done for the 3 angles 45°, 50° and 55°, with weights 0.3, 0.4 and 0.3, respectively. If you have declared the angle of the parent object to be a fit parameter, the set of 3 angles is moved automatically when the value of the center angle changes during the fit. This helps to adjust the distribution of angles to match the experimental settings.

Finally, there is a new fit parameter called ‘Scaling factor of angle range’. This number scales the distance of the individual angles to the center angle: If the factor is 1.0 the original angles are used. If the factor is 0.1, for example, a value of 5° becomes 0.5°. Varying the factor between 0.1 and 2 in the example shown above, you can compute spectra for distributions ranging from -0.5° … 0.5° to -10° … 10°.
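Both mechanisms (the list of relative angles with weights and the scaling factor) are easy to emulate outside the software. The sketch below does the weighted average for a bare air/silicon interface in s-polarization; the Fresnel formula, the silicon index and the normalization of the weights are my own illustrative choices and not taken from SCOUT or CODE.

    import numpy as np

    def R_s(aoi_deg, N_sub):
        # s-polarized intensity reflectance of a single air/substrate interface
        th = np.radians(aoi_deg)
        cos1 = np.cos(th)
        cos2 = np.sqrt(1 - (np.sin(th) / N_sub) ** 2)
        rs = (cos1 - N_sub * cos2) / (cos1 + N_sub * cos2)
        return np.abs(rs) ** 2

    offsets = np.array([-5.0, 0.0, 5.0])       # relative angles from the example above
    weights = np.array([0.3, 0.4, 0.3])
    center = 50.0                              # angle of the parent spectrum object
    scale = 1.0                                # 'scaling factor of angle range'

    angles = center + scale * offsets          # 45, 50, 55 degrees for scale = 1.0
    N_si = 3.88 + 0.02j                        # illustrative silicon index near 633 nm
    R_avg = np.sum(weights * R_s(angles, N_si)) / np.sum(weights)
    print(angles, R_avg)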

We hope that this new flexibility helps to achieve better fitting results for spectra measured with microscopes or other systems whose spectral features depend critically on the angle of incidence.

New spectrum type

Starting with object generation 5.11 we have introduced a new spectrum type called ‘Rp/Rs’. If you select this option the spectrum object computes the ratio of the intensity reflectance for p-polarized light to that for s-polarized light. This ratio can be directly fitted to experimental data if your measurement system produces this quantity as its final result.
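This ratio is related to the ellipsometric angle Psi by Rp/Rs = tan²(Psi). A minimal sketch for a single interface (with an illustrative silicon index, not a value from our database) looks like this:

    import numpy as np

    def rp_rs(aoi_deg, N_sub):
        # ratio of p- and s-polarized intensity reflectances of a single interface
        th = np.radians(aoi_deg)
        cos1 = np.cos(th)
        cos2 = np.sqrt(1 - (np.sin(th) / N_sub) ** 2)
        rs = (cos1 - N_sub * cos2) / (cos1 + N_sub * cos2)
        rp = (N_sub * cos1 - cos2) / (N_sub * cos1 + cos2)
        return np.abs(rp) ** 2 / np.abs(rs) ** 2

    print(rp_rs(60.0, 3.88 + 0.02j))           # illustrative bare-silicon value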

Tolerated intervals for integral quantities

Design targets for color values or other integrated spectral values may not always be well-defined numbers. You may also want to search for designs where color values stay within tolerated boundaries, while the exact position within those boundaries does not matter.

Such design situations can be handled in CODE using so-called ‘penalty shape functions’. However, the use of this concept turned out to be rather complicated.

We have now (starting with version 5.02) introduced a very easy definition of tolerated intervals for integral quantities: Instead of typing in the target value you can define an interval by entering 2 numbers with 3 dots in between, like ‘23 … 56’ or ‘-10 … -8’. If the integral value lies within the interval, its contribution to the total fit deviation is zero. Outside the interval, the squared difference between the actual value and the closest interval boundary is taken, multiplied by the weight of the quantity.
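In pseudo-code terms, the contribution of one integral quantity could look like the sketch below; the function and variable names are mine and do not reflect the internal CODE implementation.

    def interval_penalty(value, low, high, weight=1.0):
        # contribution of one integral quantity when the target is an interval
        if low <= value <= high:
            return 0.0                          # inside the tolerated interval
        nearest = low if value < low else high  # closest interval boundary
        return weight * (value - nearest) ** 2

    print(interval_penalty(60.0, 23.0, 56.0))   # outside '23 ... 56': weighted squared distance
    print(interval_penalty(40.0, 23.0, 56.0))   # inside: 0.0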

Leng model for optical constants

We have implemented (starting with object generation 4.99) the Leng oscillator which was developed to model optical constants of semiconductors. As shown in the original article (Thin Solid Films 313-314 (1998) 132-136) it works well for crystalline silicon:


Warning: While the model works fine in the vicinity of strong spectral features (critical points in the joint density of states) it may generate non-physical n and k values in regions of small absorption.

Bugfix master model

With version 4.56 we have removed a bug in master models for optical constants. Saving a successful model and re-loading it could lead to a strange mix-up of parameter values in some situations, leaving the poor user with a useless configuration. We recently taught CODE and SCOUT to correctly count master and slave parameters – saving and loading should work now.

Gradient?

How do I describe a gradient of optical constants?

A gradient of optical constants can be implemented in an optical model using the layer type ‘concentration gradient’.

The strategy is this:
Define an effective medium, i.e. a mixture of 2 materials, using one of the simple effective medium objects Bruggeman, Maxwell-Garnett or Looyenga. The best choice for gradients is Bruggeman, also known as EMA. The only parameter to describe the topology of the mixture is the volume fraction.
In the next step, insert a concentration gradient layer into the layer stack and assign the effective medium material to it. The gradient will be described by a user-defined function that defines the depth dependence of the volume fraction from the top of the layer to the bottom. Select the layer and use the Edit command to access the formula definition window.
Finally, you go back to the layer stack definition window and specify how many sublayers should be used to sample the gradient. Be careful not to take too many sublayers – this could increase the computation time a lot. On the other hand, be careful not to ‘undersample’ the gradient – if you increase the number of sublayers by 1 the spectra should not change significantly.
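To illustrate the strategy, the sketch below solves the two-phase Bruggeman equation for each sublayer of a linearly graded layer. The dielectric constants, the layer thickness, the number of sublayers and the profile function are arbitrary example values; SCOUT and CODE handle all of this internally.

    import numpy as np

    def bruggeman(eps_a, eps_b, f_a):
        # two-phase Bruggeman (EMA) effective dielectric function,
        # f_a = volume fraction of component a (spherical inclusions, 3D)
        b = (3 * f_a - 1) * eps_a + (2 - 3 * f_a) * eps_b
        root = np.sqrt(b ** 2 + 8 * eps_a * eps_b)
        eps_plus = (b + root) / 4
        eps_minus = (b - root) / 4
        return np.where(eps_plus.imag >= 0, eps_plus, eps_minus)   # physical branch

    def gradient_sublayers(eps_top, eps_bottom, total_d, n_sub, profile):
        # sample a concentration-gradient layer as n_sub homogeneous sublayers;
        # profile(x) is the volume fraction of the 'top' material at relative depth x
        x = (np.arange(n_sub) + 0.5) / n_sub            # sublayer midpoints, 0 = top
        f = np.array([profile(xi) for xi in x])
        return np.full(n_sub, total_d / n_sub), bruggeman(eps_top, eps_bottom, f)

    # hypothetical linear gradient, 100 nm thick, sampled by 10 sublayers
    d, eps = gradient_sublayers(4.0 + 0.1j, 2.1 + 0.0j, 100.0, 10, lambda x: 1.0 - x)
    print(d)
    print(np.round(eps, 3))

Doubling the number of sublayers in such a test and checking that the resulting spectra stay the same is exactly the undersampling check recommended above.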

Please consult the manual for further details …

Roughness

How do I describe surface roughness?

Roughness can be taken into account in different ways in optical models for thin films.

The easiest way is to replace the sharp interface between two materials by a thin layer with mixed optical constants. This is a good approach for roughness dimensions clearly below the light wavelength. Mixed optical constants can be computed using an effective medium model. The Bruggeman formula (also known as EMA – effective medium approximation) is a good choice (please read the remarks below). Start with a very thin intermediate layer (e.g. 2 nm) and select its thickness as a fit parameter. The volume fraction of the effective medium model should be an adjustable fit parameter as well.
An advanced version of this kind of roughness modeling is to use a concentration gradient layer. This type of layer can describe a smooth transition between adjacent materials. The depth dependence of the volume fraction can be expressed by a user-defined formula with up to 6 parameters that can be fitted.
A warning must be raised about using effective medium models to describe roughness effects. All effective medium models in our software are made for two-phase composites which are isotropic in three dimensions. They are not valid for surface roughness effects. Applying these concepts anyway can be justified by saying that there is no reasonable alternative and that it is very common to do so …
Describing rough metal surfaces with an effective medium model may be a little tricky. This is especially true for silver. Inhomogeneous metal layers can be efficient absorbers with very special optical properties, and effective medium approaches can be very wrong – please read SCOUT tutorial 2 about this.

The roughness described by effective medium layers does not lead to light scattering but modifies the reflectance and transmittance properties of the interface between the adjacent materials. Light scattering at strongly rough interfaces can be taken into account in a phenomenological way in cases where the detection mechanism of the spectrometer system does not collect all the scattered radiation. You can introduce a layer of type ‘Rough interface’ which scales down the reflection and transmission coefficients for the electric field amplitude of the light wave. The loss function is a user-defined function which may contain 2 parameters C1 and C2 which are fit parameters. In addition, the symbol x in the formula represents the wavenumber.
The expression C1*EXP(-(X/C2)^2), for example, would describe an overall loss by a factor C1 and an additional frequency-dependent loss which is large for small wavelengths and small for large wavelengths. This kind of approach has turned out to be satisfactory in several cases.
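Assuming that the user-defined function is the factor by which the amplitude coefficients are multiplied (which is how the example above reads), a minimal sketch could look like this; the wavenumber range, the smooth-interface coefficient and the values of C1 and C2 are arbitrary:

    import numpy as np

    def interface_loss(x, C1, C2):
        # scaling factor for the field-amplitude reflection and transmission
        # coefficients of a rough interface; x is the wavenumber
        return C1 * np.exp(-(x / C2) ** 2)

    # hypothetical parameters: 5 % overall loss, growing towards short wavelengths
    x = np.linspace(5000.0, 25000.0, 5)        # wavenumbers in 1/cm (2000 ... 400 nm)
    r_smooth = 0.56                            # amplitude reflection of the flat interface
    r_rough = interface_loss(x, C1=0.95, C2=40000.0) * r_smooth
    print(np.round(r_rough, 3))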