%0 Journal Article %A Barry D. Jacobson %T Frequency Transformations and Spectral Gaps in Cochlear Implant Processing %D 2016 %R 10.1101/035824 %J bioRxiv %P 035824 %X In previous work [1] we created a mathematical model and identified a major source of distortion in cochlear implant (CI) processing, which manifests itself in three forms, all due to the nonlinear envelope-processing algorithms widely used in some form or another in many current models. The first is spectral gaps, or dead zones, within the claimed frequency coverage range: there exist regions of the spectrum for which no possible input can produce an output at those frequencies. The second is frequency transformations, which convert input tones of one frequency to tones of another frequency. Because this is a many-to-one transformation, it renders following a melody impossible, as the fundamental frequencies of two different notes may be mapped to the same output frequency. This makes them impossible to distinguish (although there may be differences in higher-order harmonics that we will discuss). The third is intermodulation products between input tones, which yield additional output tones that were not present in either input. In the case of multiple talkers, these compound the comprehension difficulty: not only are the original spectral components of each speaker transformed, but additional nonexistent components are added into the mix. This accounts for the great difficulty CI users experience in noise. In this work, we extend our earlier work in three ways. First, we clarify our description of spectral gaps, which a number of readers found unclear, in that it implied that certain input tones will produce no response at all. In fact, all input tones produce a response, but in most cases the output is frequency-transformed to a different frequency which the CI is capable of producing.
Second, we graphically illustrate the input/output frequency transformation, so that the reader can see at a glance how each frequency is altered. The form of this transformation is a staircase over most of the usable range, meaning that for single pure tones, all frequencies in the passband of a particular channel are mapped to a single frequency: the center frequency of that channel. As frequency increases further, all frequencies in the passband of the next channel are mapped to the center frequency of that channel, and so on. The exception is at the low frequencies, for reasons that we discuss. Third, in our earlier work we analyzed the simple case of only two pure tones within a single channel. Here we extend the analysis to the more realistic case of mixtures of complex tones, such as musical notes or the vowels of speech, each of which may have multiple harmonics extending throughout much of the audible frequency range. We find that, as expected, the output components of a source within a single channel often clash (are dissonant) with each other, and with those output components of that source (higher harmonics) which fall within other channels. Thus, instead of a harmonic, or integral, relationship among the output spectral components of each source, these components are no longer related to each other harmonically as they were at the input, producing a dissonant and grating percept. Furthermore, in the case of two or more complex tones, additional intermodulation components are produced that further distort the sound. All these assertions are derived from theoretical considerations, corroborated by the author's own listening experience, and further confirmed through correspondence with other CI users. %U https://www.biorxiv.org/content/biorxiv/early/2016/01/01/035824.full.pdf
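The staircase input/output mapping described in the abstract can be sketched in a few lines. This is a hypothetical illustration, not the paper's code: the channel edge frequencies below are invented for the example (real CI filterbanks differ), and the mapping simply assigns any pure tone in a channel's passband to that channel's center frequency, which is what produces the many-to-one, melody-destroying behavior the abstract describes.

```python
import bisect

# Illustrative channel passband edges in Hz (hypothetical values,
# not taken from any actual CI filterbank design).
edges = [200, 400, 700, 1100, 1700, 2600, 4000, 6000, 8000]

# Each channel's output frequency is its passband center.
centers = [(lo + hi) / 2 for lo, hi in zip(edges, edges[1:])]

def output_frequency(f_in):
    """Staircase map: a pure tone in a channel's passband is output
    at that channel's center frequency; tones outside the covered
    range return None."""
    if f_in < edges[0] or f_in >= edges[-1]:
        return None
    ch = bisect.bisect_right(edges, f_in) - 1
    return centers[ch]

# Two different notes falling in the same passband become
# indistinguishable at the output (many-to-one mapping):
assert output_frequency(420) == output_frequency(650) == 550.0
```

Because every frequency inside a passband collapses onto one output value, the regions between channel centers are never produced, illustrating the spectral gaps within the claimed coverage range.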