2.3 The detector’s time behavior

A detector’s time resolution is limited by its response to an instantaneous change of the input signal. Due to the electrical capacitances of the light-sensitive element and the electronics, the output signal does not change instantaneously but increases or decreases gradually until it reaches its final value. The detector’s rise time is defined as the time span required for the output signal to rise from a certain low percentage (usually 10 %) to a certain high percentage (usually 90 %) of its maximum value when a steady input is applied instantaneously. The fall time is defined analogously as the time span required for the output signal to drop from a certain high percentage (usually 90 %) to a certain low percentage (usually 10 %) of its maximum value when a steady input is removed instantaneously.
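In practice, rise and fall times are read off a sampled step-response trace. The following minimal Python sketch (the function name rise_time and the synthetic trace are illustrative assumptions, not from the text) locates the 10 % and 90 % crossings in such a trace:

```python
import math

def rise_time(times, values, lo=0.10, hi=0.90):
    """10 %-to-90 % rise time of a sampled, monotonically rising step response."""
    y0, yf = values[0], values[-1]
    lo_level = y0 + lo * (yf - y0)
    hi_level = y0 + hi * (yf - y0)
    t_lo = next(t for t, y in zip(times, values) if y >= lo_level)
    t_hi = next(t for t, y in zip(times, values) if y >= hi_level)
    return t_hi - t_lo

# Demo on a synthetic exponential step response with tau = 1 us:
tau = 1e-6
ts = [i * 1e-9 for i in range(10_000)]     # 10 us trace sampled in 1 ns steps
ys = [1.0 - math.exp(-t / tau) for t in ts]
print(rise_time(ts, ys))                   # ~2.2e-6 s, i.e. about 2.2 * tau
```

The fall time is obtained the same way on a falling trace, with the comparisons reversed.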

Typically, the detector’s response to an instantaneous change of the input signal approaches the final value exponentially. The detector’s time behavior is thus best described by the time constant τ, which is the time span required for the output signal to change from its initial value by 63 % of its total change (the value of 63 % derives from 1 − 1/e ≈ 0.63). The temporal change of the output signal Y(t) from its initial value Y0 to its final value Yf is therefore given by

Y(t) = Y0 + ( Yf − Y0 ) × ( 1 − e^(−t/τ) )
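The exponential law also links the time constant to the rise time defined above. Setting Y(t) equal to the 10 % and 90 % levels of the total change and solving for t gives

t10 = τ × ln( 1 / 0.9 ) and t90 = τ × ln( 1 / 0.1 )

so that

t_rise = t90 − t10 = τ × ln 9 ≈ 2.2 τ

A detector that responds exponentially therefore needs about 2.2 time constants to pass from 10 % to 90 % of its final value, consistent with the numerical sketch above.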

Gigahertz Optik’s integral detectors use photodiodes that are typically characterized by time constants in the μs range. Since most variable light sources change their intensity on significantly longer time scales, the detector’s time constant is rarely an issue. However, lasers in particular are often pulsed at frequencies on the order of 10^9 Hz (for example in telecommunication), which corresponds to signal periods on the order of 1 ns. Here, the comparatively slow response of standard photodiodes prevents an accurate characterization of the laser signal’s time characteristics.
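To illustrate this scale mismatch, the sketch below models the detector as a simple first-order low-pass filter and feeds it a 1 GHz square pulse train. All names and parameter values (first_order_filter, τ = 1 μs, the sampling step) are illustrative assumptions; the point is that the output settles near the pulse train’s average level while the ns pulse structure is almost completely lost:

```python
import math

def first_order_filter(signal, dt, tau):
    """Model the detector as a first-order low-pass with time constant tau."""
    alpha = 1.0 - math.exp(-dt / tau)   # exact step per sample for a held input
    out, y = [], 0.0
    for x in signal:
        y += alpha * (x - y)            # output relaxes toward the current input
        out.append(y)
    return out

dt = 0.1e-9                             # 0.1 ns sampling step
n = 200_000                             # 20 us of signal in total
# 1 GHz square pulse train (1 ns period, 50 % duty cycle), unit amplitude:
pulses = [1.0 if (i % 10) < 5 else 0.0 for i in range(n)]

out = first_order_filter(pulses, dt, tau=1e-6)   # us-scale photodiode
tail = out[n // 2:]                              # discard the settling phase
print(max(tail) - min(tail))   # ~2.5e-4: the ns pulse structure is averaged out
```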