This workbook is intended for English-language classes with vocational college (SPO) students of the sound engineering programme. It contains exercises that check comprehension of the book's content: open-ended questions, true/false items, and multiple-choice questions. The exercises are arranged by the chapters and pages of the book. The answer key and a glossary are given at the end of the workbook. The exercises can also be used as credit-test assignments.
WORKBOOK
FOR PRACTICAL ASSIGNMENTS IN THE DISCIPLINE
"FOREIGN LANGUAGE (ENGLISH)"
within the general humanities and socio-economic cycle
Specialty 53.02.08 "Musical Sound Engineering" (Музыкальное звукооператорское мастерство)
Based on Robert Toft's Recording Classical Music, 2020
Compiled by Alexandra Kirillovna Ryabko, teacher of English
Contents
|  | page |
| Explanatory note. Methodological guidelines for studying the discipline | 3 |
| Chapter 1 | 6 |
| Chapter 2 | 8 |
| Chapter 3 | 11 |
| Chapter 4 | 13 |
| Chapter 5 | 17 |
| Chapter 6 | 19 |
| Chapter 7 | 21 |
| Chapter 8 | 23 |
| Chapter 9 | 27 |
| Chapter 10 | 32 |
| Chapter 11 | 34 |
| Chapter 12 | 36 |
| Chapter 13 | 38 |
| Chapter 14 | 39 |
| Chapter 15 | 40 |
| Appendix 1. Glossary | 44 |
| Appendix 2. Answer key | 55 |
Explanatory Note

The workbook is intended for practical assignments in the discipline "Foreign Language (English)" within the general humanities and socio-economic cycle for specialty 53.02.08 "Musical Sound Engineering". Its main purpose is to consolidate and activate language and speech material and to automatize lexical and grammatical skills through work with profession-oriented texts. The workbook contains a set of exercises that help students improve their skills in working independently with specialized texts. The exercises aim at the quick and reliable memorization of the professional terminology of musical sound engineering on the basis of profession-oriented texts. Thanks to the system of exercises used, the workbook teaches students to analyse the semantic content and the logical and communicative organization of a text, which is necessary both for full comprehension of what is read and for its adequate use in speech. The tasks can be set as homework or used as revision assignments in class.

The workbook consists of 15 units and two appendices (Appendix 1 and Appendix 2). The material of each unit provides for the sequential, step-by-step study of a topic connected with the students' future professional activity and with the principles applied in the practice of musical sound engineering. Each lesson is built around the development of speech activity: reading and speaking.

Appendix 1 contains a dictionary of professional terms and a glossary. Appendix 2 contains the keys to the exercises.

The wide range of practical tasks organizing independent work requires a creative attitude from students (there are tasks of increased difficulty, i.e. tasks with an extended answer) and makes it possible to implement a learner-centred approach with students of different levels of preparation and different interests. The workbook includes tasks that prepare students for objective assessment and self-assessment in the process of learning English.

The workbook corresponds to the level of training required of students in the discipline "Foreign Language (English)" within the general humanities and socio-economic cycle for the specialty "Musical Sound Engineering".
Methodological guidelines for studying the discipline

In accordance with the Federal State Educational Standard (FGOS) for the discipline Foreign Language (English), specialty 53.02.08 "Musical Sound Engineering", the student must meet the following requirements.

Requirements for the results of mastering the discipline

On completing the course, the student must possess a number of general competencies: search for and use the information needed for the effective performance of professional tasks and for professional and personal development (OK 4); use information and communication technologies in professional activity (OK 5); work in a team, communicating effectively with colleagues, management, and customers (OK 6); take responsibility for the work of team members (subordinates) and for the results of completed tasks (OK 7); independently determine the tasks of professional and personal development, engage in self-education, and consciously plan further training (OK 8); adapt to frequent changes of technology in professional activity (OK 9).

As a result of studying the discipline, the student must
- know: the lexical minimum (1,200-1,400 lexical units) and the grammatical minimum needed to read and translate (with a dictionary) profession-oriented texts in the foreign language;
- be able to: communicate (orally and in writing) in the foreign language on professional and everyday topics; translate (with a dictionary) profession-oriented foreign texts; independently improve oral and written speech and expand vocabulary;
- have: practical command of oral and written speech in the foreign language in the course of professional activity;
- demonstrate: the ability and readiness to apply the acquired knowledge in practice.
The structure of the practical lessons includes:
1. Exercises: post-reading tasks that check comprehension and develop skills on the basis of the topics of the texts read. Multiple-choice and true/false tasks can be used as homework and for quick checks of reading comprehension; tasks with an extended answer can be used for group discussion in class.
2. Appendix 1: a language commentary (Glossary) consisting of a dictionary of the most frequent words and expressions used in the field of musical sound engineering, with linguistic notes explaining the meaning of the main professional terms.
3. Appendix 2: keys to the exercises.
Chapter 1
Page 3
1. What causes the compression and rarefaction of air molecules?
A. Vibrating objects
B. Static objects
C. Sound waves
D. Light waves
2. What is a complete cycle of a sound wave measured in?
A. Degrees
B. Meters
C. Seconds
D. Hertz
3. What shape does the simplest waveform, the sine wave, have?
A. Square
B. Triangle
C. Sine
D. Circle
4. At how many degrees is the peak of a sound wave represented?
A. 90°
B. 180°
C. 270°
D. 360°
5. How do physicists usually represent soundwaves?
A. By drawings of air molecules
B. By undulating lines on a graph
C. By sound recordings
D. By visual art
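A brief reference note for the items above (added for this workbook, not quoted from Toft's text):

y(t) = A \sin(2\pi f t); one full cycle spans 360°, with the positive peak at 90°, the zero crossing at 180°, and the trough at 270°.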
Page 5 True/False
1. Instruments generate only a single frequency for each note, according to the document.
2. The fundamental frequency is the first harmonic of the series.
3. Complex waveforms with a recognizable harmonic timbre consist of a collection of sine waves that are not integer multiples of the fundamental frequency.
4. The sine waves described in the document have pitch but lack timbral quality.
5. Noise is an example of complex waves with harmonic timbre.
Page 6
1. What is the mathematical relationship between overtones and the fundamental frequency on a vibrating string?
2. How does the harmonic series relate to musical notes on a staff?
3. What contributes to the characteristic timbre of musical instruments?
4. What examples are given to illustrate the differences in overtone emphasis between instruments?
5. What visual tools are mentioned for analyzing the overtone series of a violin note?
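A short worked example of the overtone relationship asked about above (the 110 Hz value is illustrative, not from the book):

f_n = n \cdot f_1; for a string with fundamental f_1 = 110 Hz (A2), the overtones lie at 220, 330, 440, and 550 Hz, i.e. integer multiples of the fundamental.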
Page 9
1. What does the dotted line in Figure 1.11 represent?
A. Direct sound of the first wavefront
B. Early reflections
C. Late reflections
D. Reverberation
2. When do early reflections begin according to the document?
A. At 10 ms
B. At 30 ms
C. At 50 ms
D. At 80 ms
3. What happens to the reflections after about 80 ms?
A. They become distinguishable
B. They fade to silence
C. They become early reflections
D. They become direct sound
4. What do late reflections provide to the sound?
A. A sense of direction
B. A sense of fullness
C. A sense of distance
D. A sense of clarity
5. What does direct sound help listeners determine?
A. The size of the room
B. The location of the source
C. The fullness of the sound
D. The distance from the source
Pages 9-10 True/False
1. A concert hall must be quiet enough for very soft passages to be clearly audible.
2. Reverberation time is not an important factor in determining the quality of a performance venue.
3. Opera houses have the longest reverberation times compared to orchestral and chamber music venues.
4. Orchestral halls generally have reverberation times between 1.55 and 2.05 seconds.
5. Chamber halls have longer pre-delays than orchestral halls.
Page 11-12
1. What is the effect of early reflections in a reverberant space?
A. They create a sense of intimacy
B. They make the sound more diffuse
C. They have no effect on sound quality
D. They only affect larger halls
2. Why are rectangular-shaped performance spaces preferred according to Beranek?
A. They have more early reflections from nearby sidewalls
B. They are wider than other shapes
C. They have longer reverberation times
D. They are easier to construct
3. What is the ideal reverberation time (RT60) for choral music in large cathedrals?
A. 1.4 seconds
B. 2.5 seconds or more
C. 1.5 seconds
D. 0.5 seconds
4. What contributes to a sense of spaciousness in narrow shoebox-shaped rooms?
A. Abundance of early lateral reflections
B. Lack of reverberation
C. Presence of only late reflections
D. Smaller dimensions
5. What type of music benefits from narrower rectangular halls with strong early reflections?
A. Choral music
B. Jazz music
C. Eighteenth-century concertos and symphonies
D. Pop music
Chapter 2
Page 13
1. What does a microphone do in the audio chain?
A. It amplifies sound
B. It makes an electrical copy of sound waves
C. It converts digital signals to analog
D. It stores audio recordings
2. What is the purpose of an analog-to-digital converter (ADC)?
A. To amplify sound
B. To change voltage into a digital form
C. To transduce electrical current into sound waves
D. To store audio recordings
3. What does a digital-to-analog converter (DAC) do?
A. It converts digital signals into variations of voltage
B. It amplifies sound before it reaches the speakers
C. It makes an electrical copy of sound waves
D. It generates line-level output
4. What is the role of an amplifier in the audio chain?
A. To convert sound waves into electrical signals
B. To generate line-level output from a weak current
C. To transduce electrical current into sound waves
D. To store audio recordings
5. What happens to the audio signal as it travels along the audio chain?
A. It remains unchanged
B. It is amplified only
C. It may degrade due to multiple devices
D. It is stored digitally
Page 14
1. What does the term 'analog' refer to in the context of audio recording?
2. How does digital audio differ from analog audio in terms of signal representation?
3. What are the three components of Pulse Code Modulation (PCM) and what does each component do?
4. Who invented Pulse Code Modulation (PCM) and in what decade did this occur?
5. What does the term 'bit' stand for in digital audio systems, and what does it signify?
Page 15
1. What is the process of measuring the voltage of an electrical audio signal at regular intervals called?
A. Sampling
B. Aliasing
C. Reconstruction
D. Nyquist Theorem
2. What sampling rate is commonly used for CD distribution?
A. 40k
B. 44.1k
C. 48k
D. 96k
3. According to the Nyquist theorem, how often must measurements be taken to accurately recreate an analog signal?
A. At least once per second
B. At least twice the highest frequency
C. At least four times the highest frequency
D. At least half the sampling rate
4. What occurs when sampling rates are below the minimum dictated by the Nyquist theorem?
A. Reconstruction
B. Aliasing
C. Sampling
D. Filtering
5. What is the upper limit of human hearing in kHz?
A. 10 kHz
B. 20 kHz
C. 30 kHz
D. 40 kHz
Page 15 True/False
1. A group of eight digits is called a byte.
2. 24-bit audio allows for a dynamic range of 120 dB.
3. The maximum frequency a digital system can represent is equal to the sampling rate.
4. Sampling is the process of measuring the voltage of an electrical audio signal at irregular intervals.
5. The Nyquist theorem states that measurements must be taken at a rate twice the highest frequency in the signal.
6. A sampling rate of 44.1k is sufficient for accurately reconstructing waveforms for human hearing.
7. Aliasing occurs when too many samples are taken of a signal.
8. The minimum sampling rate required for the upper limit of human hearing is 40k.
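A worked reference for the Nyquist items above (standard figures, added for checking answers):

f_s \ge 2 f_{max}; with the upper limit of hearing near f_{max} = 20 kHz, the minimum sampling rate is 40 kHz, and the CD rate of 44.1 kHz leaves a small margin for the anti-aliasing filter.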
Page 16 True/False
1. Sampling imposes a continuous flow of data on a signal.
2. Quantization introduces errors into the system that can be heard as nonrandom noise.
3. A 3-bit scale has eight possible steps.
4. Dithering is used to reduce the audibility of errors in an audio signal.
5. The addition of random noise to a signal can replace nonrandom distortion with a more pleasing noise spectrum.
Page 16
1. What is the effect of quantization on a digital signal?
2. How does the number of bits in a scale affect the rounding error?
3. What is the purpose of dithering in audio processing?
4. What happens during the process of requantization?
5. Why is random noise preferred over nonrandom noise in audio signals?
Page 16
1. What does sampling impose on a signal?
A. A succession of discrete measurement points
B. A continuous flow of data
C. An infinite number of values
D. A random selection of points
2. What is the effect of quantization on a signal?
A. It introduces errors into the system
B. It enhances the signal quality
C. It eliminates noise completely
D. It increases the number of measurement points
3. What is the purpose of dithering in audio processing?
A. To add random noise to the signal
B. To reduce the bit depth without distortion
C. To increase the amplitude of the audio
D. To eliminate all rounding errors
4. How many steps does a 16-bit scale have?
A. 65,536 steps
B. 256 steps
C. 16 steps
D. 4 steps
5. What happens when the amplitude of audio falls to its lowest levels?
A. The relative size of the error becomes larger
B. The audio becomes inaudible
C. The quantization error disappears
D. The noise becomes random
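A reference calculation for the quantization items above (the 6.02 dB-per-bit rule is the standard theoretical approximation; the 118-120 dB figures cited for page 17 describe perceived dynamic range with noise shaping):

steps = 2^n, so 2^{16} = 65,536 steps; dynamic range \approx 6.02 \cdot n dB, giving roughly 96 dB for 16-bit and 144 dB for 24-bit audio.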
Page 17
1. What is one of the commonly used types of dither mentioned in the document?
A. TPDF
B. MP3
C. WAV
D. AIFF
2. What does noise shaping concentrate on according to the document?
A. Lower frequencies
B. Higher frequencies
C. Mid frequencies
D. All frequencies
3. What is the perceived dynamic range of 16-bit audio according to the document?
A. 96.0 dB
B. 118.0 dB
C. 120.0 dB
D. 80.0 dB
4. What is the main benefit of 24-bit audio production mentioned in the document?
A. Lower noise floor
B. Higher sample rate
C. Better sound quality
D. Easier editing
5. What can mask any improvements from dithering in a recording chain?
A. High levels of noise
B. Low bit depth
C. High sample rate
D. Poor quality equipment
Page 17 True/False
1. TPDF is a commonly used type of dither.
2. Noise shaping increases the noise level in the audible frequency range.
3. The dynamic range of 16-bit audio is 118.0 dB.
4. Dithering is applied as the last stage of preparing tracks for delivery.
5. A 16-bit noise floor is above what humans can hear.
Page 17
1. What is TPDF and how does it relate to audio dithering?
2. What is the purpose of noise shaping in audio production?
3. How does increasing the bit depth from 16 to 24 bits affect audio quality?
4. What is the significance of the noise floor in audio recording?
5. What precautions should engineers take when applying dither to audio tracks?
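A minimal Python sketch of the dithering idea discussed above (an illustration only, not code from the book; the function name, the ±1 LSB TPDF amplitude, and the target bit depth are assumptions):

import numpy as np

def requantize_with_tpdf(samples, bits=16):
    """Requantize float samples in [-1.0, 1.0) to integer codes after adding TPDF dither."""
    lsb = 1.0 / (2 ** (bits - 1))                      # size of one quantization step
    # TPDF dither: the sum of two independent uniform values, spanning +/- 1 LSB.
    dither = (np.random.uniform(-0.5, 0.5, samples.shape) +
              np.random.uniform(-0.5, 0.5, samples.shape)) * lsb
    codes = np.round((samples + dither) / lsb).astype(np.int32)
    max_code = 2 ** (bits - 1) - 1                     # e.g. 32,767 for 16-bit
    return np.clip(codes, -max_code - 1, max_code)     # clamp to the legal code range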
Page 18 True/False
1. A DAC converts analog signals to digital code.
2. The sound quality of digital audio depends on sample rate and bit depth.
3. Higher sample rates decrease the frequency bandwidth devices can encode.
4. High-resolution audio has a bit depth of at least 24.
5. The DAC sends the waveform through a reconstruction filter to smooth the signal.
Chapter 3
Page 23 True/False
1. A microphone converts soundwaves from mechanical energy to electrical energy.
2. The diaphragm of a microphone responds to pressure differences between its front and back sides.
3. Soundwaves arriving at the edge of the diaphragm cause it to move.
4. A freely suspended diaphragm produces a bidirectional pickup pattern.
5. Listeners hear an accurate representation of the sound source when the microphone signal is not replicated faithfully.
Pages 25-26
1. What type of pickup pattern does a single diaphragm suspended between two points naturally exhibit?
A. Unidirectional
B. Bidirectional
C. Omnidirectional
D. Cardioid
2. What shape does the cardioid pickup pattern resemble?
A. Circle
B. Square
C. Heart
D. Triangle
3. What happens to the signals generated by the two capsules at the rear of the microphone?
A. They amplify each other
B. They cancel each other
C. They create a feedback loop
D. They are ignored
4. What is the primary function of combining a bidirectional mic with a unidirectional mic?
A. To create a cardioid pattern
B. To increase sound quality
C. To reduce weight
D. To enhance bass response
5. What does the Omni portion of the mic respond to?
A. Sound from the front
B. Sound from the rear
C. Sound from the sides
D. Sound from all directions
Pages 24-25 Questions:
1. What is the basic operating principle of condenser microphones?
2. How does a pressure transducer microphone respond to sound waves?
3. What materials are commonly used for the diaphragm in pressure transducer microphones?
4. What is the purpose of the small holes in the backplate of a pressure transducer?
5. How do manufacturers compensate for high-frequency loss when using omnidirectional microphones in a reverberant sound field?
Page 27 (Figure 3.6) True/False
1. Today's cardioid microphones use principles of acoustic delay to achieve the same result from a single capsule.
2. The first cardioid microphones contained a single capsule.
3. Manufacturers place a delay network behind the diaphragm in cardioid microphones.
4. Acoustic resistance can be used to achieve the desired delay in cardioid microphones.
5. Figure 3.6 shows a complex device with multiple entry slots for sound.
Pages 27-28 Questions:
1. What is the purpose of introducing a delay in capsules for microphones?
2. What are the acceptance angles for cardioid, supercardioid, and hypercardioid microphones?
3. How do microphone designers define an acceptance angle?
4. What is the significance of locating sound sources within a microphone's pickup angle?
5. What is the function of dual diaphragm capsules in microphones?
Pages 28-29
1. What principle do dynamic and ribbon microphones operate on?
A. Electromagnetic induction
B. Capacitive coupling
C. Optical sensing
D. Piezoelectric effect
2. What type of pattern do dynamic microphones usually have?
A. Omnidirectional
B. Cardioid
C. Bidirectional
D. Supercardioid
3. What material is used for the diaphragm in ribbon microphones?
A. Plastic
B. Copper
C. Corrugated aluminum
D. Steel
4. What do ribbon microphones require before their signal can be used in an audio chain?
A. Minimal amplification
B. Considerable amplification
C. No amplification
D. Battery power
5. What is a characteristic of the diaphragm in ribbon microphones?
A. High mass
B. Low mass
C. Large size
D. Thick material
Chapter 4
Page 31 True/False
1. Microphones ideally should respond equally well to all frequencies across the normal range of human hearing, which is 20-20,000 Hz.
2. Manufacturers find it easy to attain a flat frequency response in large spaces.
3. In a diffuse field, rooms tend to absorb treble frequencies.
4. Microphone designers often boost the sensitivity of omnidirectional mics by 2.0-4.0 dB in the region of 10 kHz.
5. The typical response of a microphone shows a sensitivity boost centered on 20 kHz.
Pages 32-33 Questions:
1. What does the polar response of a microphone indicate?
2. How do omnidirectional microphones respond to sound from different directions?
3. What happens to the sensitivity of omnidirectional microphones at higher frequencies?
4. What are the characteristics of cardioid microphones?
5. How do supercardioid and hypercardioid microphones differ from cardioid microphones?
Page 35 True/False
1. All microphones that work on a pressure-gradient principle exhibit proximity effect.
2. The proximity effect is particularly noticeable at distances greater than 30 centimeters.
3. The Inverse Square Law states that sound intensity decreases proportionally to the square of the distance from the source.
4. Sound pressure reduces by half for every doubling of the distance from the source.
5. The critical distance is the point in a room where direct sound and reverberation are equal in level.
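A worked form of the Inverse Square Law referred to in the statements above (standard acoustics, added for reference):

\Delta L = 20 \log_{10}(d_2 / d_1) dB; doubling the distance (d_2 = 2 d_1) gives 20 \log_{10} 2 \approx 6 dB, i.e. the sound pressure halves for every doubling of distance from the source.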
Pages 32-33
1. What does the polar response of a microphone indicate?
A. Its sensitivity to sounds arriving from any location
B. Its ability to capture sound only from the front
C. Its frequency response range
D. Its physical size
2. What is the typical response pattern of omnidirectional microphones at lower frequencies?
A. A perfect circle
B. A narrow band
C. A wide lobe
D. A figure-eight pattern
3. At what frequency does the omnidirectional microphone begin to exhibit normal narrowing of response?
A. 2.0 kHz
B. 4.0 kHz
C. 8.0 kHz
D. 12.5 kHz
4. What type of microphone features a reasonably wide pickup area at the front and a large null at the rear?
A. Omnidirectional
B. Cardioid
C. Supercardioid
D. Bidirectional
5. What do supercardioid and hypercardioid microphones do compared to cardioid microphones?
A. They have a wider pickup area
B. They restrict the width of their front lobes
C. They capture sound equally from all directions
D. They have no null regions
Pages 34-35 Questions:
1. What is the Random Energy Efficiency (REE) of omnidirectional microphones and how is it used as a reference for directional microphones?
2. How do bi-directional and cardioid microphones compare in terms of their REE and what does this indicate about their sound pickup?
3. What is the distance factor of a cardioid microphone and what does it imply about its placement relative to an omnidirectional microphone?
4. What effect does the distance between a microphone and its sound source have on the recording character?
5. What is the critical distance in microphone placement and why is it significant?
Pages 36-37 Questions:
1. What role does the Inverse Square Law play when a microphone is positioned close to a sound source?
2. How does the distance from the sound source affect the sound pressure level on the rear side of the diaphragm?
3. What discrepancies in sound pressure level were determined by John Woram for a microphone placed at different distances from the sound source?
4. What is the effect of proximity on sound pressure levels for lower frequencies compared to higher frequencies?
5. What does Figure 4.9 illustrate regarding the proximity effect in a cardioid microphone?
Pages 37-39 Phase
1. What does the term 'phase' refer to in the context of a periodic wave?
A. The starting position of a wave in its cycle
B. The frequency of the wave
C. The amplitude of the wave
D. The speed of the wave
2. What occurs when the peaks of one signal coincide with the troughs of another?
A. Constructive interference
B. Destructive interference
C. Phase alignment
D. Wave amplification
3. What is the result when two soundwaves arrive at a microphone at the same time?
A. Destructive interference
B. Constructive interference
C. Phase cancellation
D. Sound distortion
4. What is comb filtering a result of?
A. Phase alignment of frequencies
B. Mathematically related cancellations and reinforcements
C. Increased amplitude of soundwaves
D. Frequency modulation
5. What happens to the amplitude of a frequency when it arrives 'in phase' at two microphones?
A. It decreases
B. It remains the same
C. It doubles
D. It cancels out
Pages 39-40 Questions:
1. What is the 'three-to-one' principle in microphone placement, and how does it help in reducing comb filtering effects?
2. What is comb filtering, and how does it affect sound quality?
3. How does the time delay between microphones influence the audibility of comb filtering?
4. What amplitude differences can occur when two microphones have identical output levels, and how can this be mitigated?
5. What is the recommended attenuation level at one microphone to make comb filtering tolerable for most listeners?
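A worked example of the comb-filter arithmetic behind the questions above (the 1 ms delay is an illustrative value): when the same source reaches two summed microphones with a time difference \Delta t, cancellations fall at odd multiples of 1/(2\Delta t) and reinforcements at multiples of 1/\Delta t; for \Delta t = 1 ms the nulls lie near 500, 1500, and 2500 Hz and the peaks near 1000, 2000, and 3000 Hz.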
Pages 40-41 True/False
1. Engineers can minimize the negative effect of distortion by reducing the difference in amplitude between the peaks and troughs caused by phase shifts.
2. An attenuation of 9.0 to 10.0 dB at one of the microphones can decrease amplitude differences to about 4.0 dB.
3. Comb filtering becomes tolerable for most listeners at amplitude differences greater than 4.0 dB.
4. The 'three-to-one' principle involves placing two microphones at equal distances from a sound source.
5. The testing procedure for the 'three-to-one' principle was carried out in an anechoic chamber.
Page 41
1. What distance from the on-axis mic did noticeable reinforcements and cancellations occur when the second transducer was located?
A. 2 feet (61 centimeters)
B. 4 feet (122 centimeters)
C. 10 feet (305 centimeters)
D. 6 feet (183 centimeters)
2. At what distance did the combined signals parallel the on-axis response of mic number one?
A. 2 feet (61 centimeters)
B. 4 feet (122 centimeters)
C. 10 feet (305 centimeters)
D. 6 feet (183 centimeters)
3. What was the maximum distance at which skilled listeners did not notice audible improvements in sound quality?
A. 2 feet (61 centimeters)
B. 4 feet (122 centimeters)
C. 6 feet (183 centimeters)
D. 10 feet (305 centimeters)
4. What ratio between microphones was found to avoid noticeable phase interference?
A. One-to-one
B. Two-to-one
C. Three-to-one
D. Four-to-one
5. What is the distance in centimeters for 10 feet?
A. 305 centimeters
B. 244 centimeters
C. 183 centimeters
D. 61 centimeters
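A rough calculation showing why the three-to-one spacing delivers the attenuation discussed on pages 40-43 (inverse-square reasoning, added for reference): a microphone three times farther from the source picks it up about 20 \log_{10} 3 \approx 9.5 dB quieter, in line with the 9.0-10.0 dB figure given in the text.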
Pages 42-43 Questions:
1. What is the recommended difference in decibels (dB) between the primary sound source and the background sounds picked up by a microphone in a multiple microphone setup?
2. What principle should recordists follow to maintain sufficient phase integrity in concert or recital hall locations?
3. How can recordists check the attenuation of a microphone during rehearsal?
4. What is the function of the Auto Align plugin in audio recording?
5. What should engineers do in situations where close miking and signal splitting are required?
Pages 42-43
1. What is the recommended difference in dB between microphones in a multiple microphone setup?
A. 5.0 to 6.0 dB
B. 7.0 to 8.0 dB
C. 9.0 to 10.0 dB
D. 11.0 to 12.0 dB
2. What principle should recordists follow to maintain sufficient phase integrity?
A. Two-to-one principle
B. Three-to-one principle
C. Four-to-one principle
D. Five-to-one principle
3. What does the Auto Align plugin do?
A. Enhances the sound quality
B. Analyzes pairs of signals for time delay
C. Records audio signals
D. Adjusts microphone placement
4. What should the level at the remaining microphone drop by when the performer stops playing?
A. At least 5.0 dB
B. At least 7.0 dB
C. At least 9.0 dB
D. At least 11.0 dB
5. What does the Auto Align plugin allow engineers to enhance?
A. The volume of the audio
B. The sense of space
C. The clarity of the audio
D. The pitch of the audio
Chapter 5
Pages 45-46
1. What is the optimum angle for stereo perception according to the document?
A. 15В°
B. 30В°
C. 45В°
D. 60В°
2. What two cues do listeners use to locate the origin of sounds?
A. Frequency and amplitude
B. Intensity and time of arrival
C. Pitch and loudness
D. Volume and distance
3. What is required for listeners to perceive sounds as coming from various points along the horizontal plane?
A. No level differences
B. Level and time discrepancies
C. Only time differences
D. Only level differences
4. What happens if the outer instruments lie beyond the pickup angle in stereo miking?
A. They will be localized properly
B. They will not be localized properly
C. They will create a phantom image
D. They will enhance the stereo effect
5. What is the effect of a difference of 15.0 to 20.0 dB or a delay of about 1.5 ms?
A. It causes the phantom image to shift to one of the speakers
B. It creates a stronger stereo effect
C. It eliminates the need for microphones
D. It reduces sound pressure level
Pages 45-46 Questions:
1. What is the significance of phase integrity in stereo recording?
2. How do listeners use intensity and time of arrival to locate sounds?
3. What is the optimum angle for stereo perception and why is it important?
4. What happens when there are no level or time differences in a stereo playback system?
5. How does stereo miking utilize level and time-of-arrival differences?
Pages 47-48
1. What is the purpose of using coincident pairs of microphones?
A. To eliminate time lags between microphones
B. To increase the volume of sound
C. To reduce background noise
D. To capture sound from multiple directions
2. What type of microphones are generally used in an X-Y configuration?
A. Omnidirectional microphones
B. Dynamic microphones
C. Cardioid microphones
D. Condenser microphones
3. What angle is commonly used between microphones in an X-Y configuration?
A. 60В° to 90В°
B. 90В°
C. 80В° to 130В°
D. 45В°
4. What effect does the X-Y configuration have on the stereo sound stage?
A. Creates a narrow sound stage
B. Produces a strong sense of lateral spread
C. Eliminates all background noise
D. Only captures sound from the center
5. What is the advantage of using omni pairs in X-Y configurations?
A. They create a stronger central image
B. They maintain a stable image when a soloist moves
C. They capture sound from a wider angle
D. They are less sensitive to off-axis sounds
Pages 48-49 Questions:
1. What is the Blumlein technique and who developed it?
2. How does the Blumlein technique create a phantom image during playback?
3. What happens to the output of the microphones when sound sources are placed to the right or left of the 0° axis?
4. What are the advantages of positioning the Blumlein microphones within the critical distance in a hall?
5. What is one disadvantage of the Blumlein technique related to side reflections?
Pages 48-49 Questions:
1. Who developed the Blumlein technique?
A. Thomas Edison
B. Alan Blumlein
C. Leonardo da Vinci
D. Nikola Tesla
2. What angle are the microphones set at in the Blumlein technique?
A. 45°
B. 90°
C. 180°
D. 30°
3. What does the Blumlein technique primarily capture during playback?
A. Direct sound only
B. Ambient sound only
C. A strong phantom image
D. Only lateral reflections
4. What is a disadvantage of the Blumlein technique?
A. It captures only direct sound
B. It requires expensive equipment
C. It may result in a 'hollow' sound quality
D. It cannot be used in live settings
5. What happens to the output of the microphones when sound sources are placed at a 45° angle?
A. Both microphones increase output
B. One microphone reaches maximum output while the other is zero
C. Both microphones remain silent
D. The output is equal for both microphones
Pages 49-50 True/False
1. The Mid-Side (M/S) technique was developed in the 1950s by Holger Lauridsen of Danish State Radio.
2. A cardioid microphone captures the sides of the ensemble in the M/S technique.
3. The M/S technique allows engineers to manipulate the two signals to change the width of the stereo sound stage.
4. In the M/S technique, the signal from the side microphone is sent to a single channel panned center.
5. More side signal in the M/S technique broadens the stereo sound stage.
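The standard Mid-Side decode behind statements 3-5 above (the conventional formulas, not quoted from the book):

L = M + S and R = M - S; raising the level of the side (S) signal relative to the mid (M) signal widens the stereo image, while lowering it narrows the image toward mono.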
Pages 51-52 True/False
1. The ORTF technique uses two cardioid microphones angled at 90°.
2. The NOS configuration separates the microphone capsules by 30 centimeters.
3. The ORTF technique was developed by the Nederlandse Omroep Stichting.
4. Comb filtering occurs at frequencies above 1 kHz in the ORTF technique.
5. The NOS technique is more useful for monophonic broadcasting than the ORTF technique.
Pages 51-52 Questions:
1. What is the purpose of using near-coincident arrays in audio recording?
2. Describe the ORTF microphone configuration and its significance.
3. What are the characteristics of the sound produced by the ORTF technique at different frequencies?
4. Explain the NOS microphone configuration and its limitations.
5. What do recordists appreciate about the ORTF array according to Bruce and Jenny Bartlett?
Chapter 6
Pages 59-60
1. What is the primary focus of critical listening during the recording process?
A. Evaluating sound quality
B. Choosing instruments
C. Editing lyrics
D. Selecting a recording studio
2. Which of the following is NOT one of the criteria for assessing audio signals?
A. Transparency
B. Timbre
C. Volume
D. Stereo Image
3. What does the term 'Loudness' refer to in the context of audio recording?
A. The clarity of details
B. The dynamic range of the music
C. The balance of sound sources
D. The tonal characteristics of sound
4. What aspect of sound does 'Spatial Environment' assess?
A. The clarity of lyrics
B. The ambience for sound sources
C. The volume of the recording
D. The type of instruments used
5. What is the goal of removing extraneous noise from a recording?
A. To enhance the stereo image
B. To distract listeners
C. To improve communication of performers' emotion
D. To increase the volume of the track
Pages 59-60 Questions:
1. What are the key criteria for assessing audio signals during the recording process according to the European Broadcasting Union?
2. How does the perception of room ambience affect the recording of soloists or ensembles?
3. What role does transparency play in critical listening during the recording process?
4. Why is it important for engineers to ensure the intelligibility of individual elements in a recording?
5. What is the significance of dynamic range in a music recording?
Page 60 True/False
1. Clipping occurs in digital systems when the signal exceeds the 0.0 dBFS threshold.
2. Analog consoles produce noticeable distortion only when the input signal exceeds the nominal 0.0 dBu level by 10.0 decibels.
3. Recordists should set their levels from the loudest passages to avoid clipping.
4. Approximately 20.0 dB of headroom is recommended to prevent peak levels from causing distortion in digital systems.
5. The noise floor of 24-bit systems can remain 90.0 to 100.0 dB below the average signal level.
Pages 61-62
1. What is the critical distance in a room?
A. The distance where sound is completely absorbed
B. The distance where only direct sound is captured
C. The distance where only reverberation is captured
D. The distance where direct and reverberant sound are equal
2. What does an SPL meter help engineers find?
A. The best recording software
B. The type of microphones to use
C. The critical distance of a room
D. The volume of the sound source
3. What is the effect of doubling the distance from the sound source in the free field?
A. The level of direct sound drops by 6.0 dB
B. The level of reverberated sound increases
C. The sound becomes clearer
D. The sound is completely absorbed
4. What is the purpose of placing microphones close to the sound source?
A. To capture clarity of detail
B. To capture only reverberation
C. To eliminate all sound
D. To create a louder sound
5. What do engineers do if a single stereo pair of microphones does not produce satisfactory results?
A. Stop recording
B. Use two pairs of microphones
C. Change the recording location
D. Use a different type of instrument
Pages 61-62 Questions:
1. What factors influence the placement of microphones in relation to room ambience during tracking?
2. How do engineers determine the critical distance in a room?
3. What is the significance of the sweet spot in microphone placement?
4. What is the third option for microphone placement mentioned in the document, and what does it require?
5. What considerations do recordists take into account when deciding on the perspective to present to listeners?
Chapter 7
Pages 65-66
1. What is the purpose of an equalizer in audio editing?
A. To boost low frequencies
B. To block all frequencies
C. To modify the frequency content of signals
D. To enhance the volume of tracks
2. What type of filter allows frequencies above a cut-off point to pass?
A. High-pass filter
B. Low-pass filter
C. Band-pass filter
D. Notch filter
3. What is the cut-off frequency defined as?
A. The point where all frequencies are blocked
B. The point at which the response is 3.0 dB below the nominal level
C. The maximum frequency allowed through the filter
D. The minimum frequency allowed through the filter
4. What does a drop of 6.0 dB per octave indicate?
A. A steep attenuation slope
B. A gentle attenuation slope
C. No attenuation
D. Complete signal loss
5. In a band-pass filter, what type of frequencies are allowed to pass?
A. All frequencies
B. Only frequencies above a certain point
C. Only low frequencies
D. Only frequencies within a defined range
Pages 65-66
1. What historical issue in telephonic communication led to the development of equalizers?
2. How do audio editors use equalization (EQ) in their work?
3. What are the different types of digital filters mentioned in the document, and how are they classified?
4. What is the significance of the cut-off frequency in a filter?
5. What is the difference between a gentle and a steep attenuation slope in filters?
Pages 67-68 True/False
1. Shelf filters can only cut frequencies and cannot boost them.
2. Parametric filters provide flexibility in shaping the frequency content of soundwaves.
3. The Pro-Q2 plugin allows users to incorporate other types of filters beyond the three main adjustable parameters.
4. A gain change of -3.0 dB was applied to one of the center frequencies in the Pro-Q2 filter example.
5. Shelf filters do not allow any frequencies to pass through.
Pages 68-69
1. What does the Q factor refer to in audio engineering?
A. The degree of alteration affecting neighboring frequencies
B. The maximum gain change applied to a frequency
C. The bandwidth of a cut or boost
D. The center frequency of a track
2. What is the effect of a high-pass filter as described in the document?
A. It boosts low frequencies
B. It alters the frequency balance by up to 12.0 dB
C. It adds warmth around 100 Hz
D. It removes rumble in the low end
3. How can audio engineers shape frequency areas according to the document?
A. By using only high-pass filters
B. Through bell-style curves and incorporating pass and shelf filters
C. By adjusting the volume of the track
D. By limiting the number of frequency bands used
4. What does a narrower Q setting affect according to the document?
A. It spreads the change over a larger bandwidth
B. It restricts the range of frequencies affected
C. It increases the maximum gain change
D. It has no effect on the neighboring frequencies
5. What is the maximum gain change that can be applied in Voxengo's Marvel CEQ?
A. 10.0 dB
B. 15.0 dB
C. 12.0 dB
D. 20.0 dB
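A reference formula for the Q factor asked about above (standard definition; the example values are illustrative):

Q = f_c / bandwidth; a bell filter centred at f_c = 1 kHz with Q = 2 affects a band roughly 500 Hz wide, so a higher Q confines the boost or cut to a narrower range of neighbouring frequencies.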
Page 70 Questions:
1. What is the purpose of a high-pass filter in audio mixing?
2. How can a shelf filter for higher frequencies enhance the audio mix?
3. What is the significance of sweeping in audio mixing?
4. Why is the 2-5 kHz frequency range important in audio mixing?
5. What did the research by Fletcher and Munson reveal about human hearing and frequency perception?
Chapter 8
Page 73 Questions:
1. What was the original purpose of compressors in audio engineering?
2. How do modern compressors differ from the manual methods used by engineers before their invention?
3. What are the three main stages of a compressor?
4. What user-adjustable features are found in the gain-control stage of a compressor?
5. Why do engineers apply make-up gain after the compression process?
Pages 73-75
1. What does the Peak/RMS switch in a compressor allow users to choose?
A. The way the plugin detects the level of a signal
B. The type of audio file to compress
C. The amount of gain reduction
D. The frequency range to compress
2. What does RMS stand for?
A. Real Mean Sound
B. Root Mean Square
C. Relative Maximum Signal
D. Random Mean Square
3. What happens when the signal rises above the threshold in a compressor?
A. The threshold is adjusted
B. The compressor stops working
C. The signal is amplified
D. The compressor automatically lowers the level
4. According to some engineers, what should compressors respond to in order for instruments and voices to sound better?
A. Averaged loudness
B. Peak values
C. RMS values only
D. Average loudness only
5. What does the threshold set in a compressor?
A. The minimum signal level
B. The maximum gain allowed
C. The level at which the compressor begins to reduce amplitude
D. The frequency response of the compressor
Pages 75-77 True/False
1. The compression ratio is determined by the amount of gain reduction that occurs when a signal exceeds the threshold.
2. A 2:1 compression ratio means that a 2.0 dB increase in input level results in a 1.0 dB increase in output level.
3. At a 4:1 ratio, an 8.0 dB increase in input level produces an output rise of 2.0 dB.
4. A ratio of 8:1 means that every 8.0 dB above the threshold at input is reduced to 1.0 dB above the threshold at output.
5. At ratios above 10:1, the plugin effectively becomes a brickwall limiter.
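A worked form of the ratio arithmetic used in the statements above (standard compressor behaviour):

output level above threshold = (input level above threshold) / ratio; at 4:1 an input 8.0 dB over the threshold leaves the output 2.0 dB over it, and at 2:1 a 2.0 dB overshoot becomes 1.0 dB.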
Pages 77-78 Questions:
1. What is the role of the attack control in a compressor?
2. How does the release control affect the compressor's output?
3. What is the difference between soft knee and hard knee in a compressor?
4. What is the purpose of the final gain control in a compressor?
5. How does the hold control function in a compressor?
Page 79 True/False
1. Limiters allow audio below a specified decibel level to pass freely while attenuating peaks that cross a user-defined threshold.
2. Brickwall limiters have a ratio of less than 10:1.
3. The true-peak limiter ISL 2 measures true peaks and allows engineers to set a maximum decibel level for signals.
4. Users can adjust the amplitude of the incoming signal through the Input Gain box in the ISL 2 plugin.
5. The TPLim box is used to select a target for the inter-sample peak limit of the signal in the ISL 2 plugin.
Pages 80-81 True/False
1. The GUI displays the amount of gain reduction applied to each channel in dBTP.
2. The history graph shows only the input level of the audio signal.
3. When only one side triggers the limiter, gain reduction is applied to both channels equally.
4. The meter indicates the degree of steering that has occurred when limiters are employed independently on the two stereo channels.
5. Ducking occurs when one signal is raised above another.
Pages 81-82
1. What does the Look Ahead box allow engineers to specify?
A. The amount of gain reduction
B. How much time the limiter should react to the entering signal
C. The output level in dBTP
D. The target bit depth
2. What does the Release function establish?
A. How quickly the limiter returns to no gain reduction
B. The amount of independence of the limiters
C. The target bit depth
D. The type of white noise used
3. What does the Auto button do in the plugin?
A. It analyzes the incoming audio signal for low-frequency information
B. It sets the target bit depth
C. It adjusts the output according to the amount of Input Gain
D. It allows users to hear only the difference between input and output
4. What does the Dither box set?
A. The degree of independence of the limiters
B. The amount of gain reduction
C. The nature of the noise shaping
D. The target bit depth
5. What does the Shaping function specify?
A. The target bit depth
B. The nature of the noise shaping
C. The amount of independence of the limiters
D. The output level in dBTP
Page 82 True/False
1. The bx_dynEQ is a static EQ that does not change its gain settings dynamically.
2. Compression can be applied in conjunction with an EQ filter according to the document.
3. Traditional static EQs are suitable for intermittent correction of frequency problems.
4. The bx_dynEQ can assist engineers in applying gain reduction to user-defined frequency bands.
5. Brainworx has developed the bx_dynEQ software for dynamic EQ filtering.
Pages 83-84
1. What type of graphical interface does Sonnox use for their Oxford Dynamic EQ?
A. Replicates hardware control knobs
B. Graphical interface similar to parametric EQ plugins
C. Text-based interface
D. Analog-style interface
2. How many distinct frequency bands can be applied in the Oxford Dynamic EQ?
A. Three
B. Four
C. Five
D. Six
3. What does the 'Detect' column in the Oxford Dynamic EQ allow users to choose?
A. The way the plugin identifies the level of a signal
B. The type of filter to apply
C. The overall output level
D. The target gain for the band
4. What does the Attack parameter determine in the Oxford Dynamic EQ?
A. The overall output level
B. The resting gain of the equalizer band
C. The type of filter applied
D. How quickly the band approaches the target level
5. What is the purpose of the Trim control in the Oxford Dynamic EQ?
A. To adjust the Q of the bell filter
B. To set the offset gain
C. To adjust the overall output of the plugin
D. To choose the frequency bands
Pages 83-84 Questions:
1. Describe the approach Sonnox takes in designing the GUI for their Oxford Dynamic EQ compared to other dynamic EQs.
2. What types of filters does the Oxford Dynamic EQ provide, and how can they be applied to the frequency bands?
3. Explain how the plugin prevents over-processing of the audio signal.
4. What functionality does the headphone icon provide in the Oxford Dynamic EQ?
5. How do the Attack and Release parameters affect the dynamics processing in the Oxford Dynamic EQ?
Pages 84-85 True/False
1. Recording a vocalist through a closely placed microphone can exaggerate sibilance associated with consonants like 's' and 'sh'.
2. Audio engineers should always increase the energy of the bands containing sibilant frequencies to avoid listener fatigue.
3. Some audio editors manually lessen the effect of harsh consonants through track automation or by cutting out sibilant moments.
4. Notched equalization is applied within the 4-10 kHz range to ease the stridency of problematic areas.
5. De-essers are specialized compressors designed to automate the process of reducing sibilance.
Pages 86-87 Questions:
1. Explain the basic operation of a de-esser and how it affects the vocal track.
2. What features does the Sonnox SuprEsser offer to audio editors for treating excessive sibilance?
3. Describe the function of the band-pass and band-reject filters in the Sonnox SuprEsser.
4. What are the three listening modes available in the Sonnox SuprEsser, and what do they do?
5. How does the de-esser ensure that listeners do not hear the filtered signal sent along the side chain?
Pages 86-87
1. What principle do de-essers work on?
A. Side chaining
B. Compression
C. Equalization
D. Reverb
2. What does the de-esser do to the original vocal track?
A. Enhances it
B. Increases volume
C. Adds reverb
D. Attenuates sibilance
3. What type of filters does the Sonnox SuprEsser use?
A. Low-pass and high-pass
B. Band-pass and band-reject
C. Notch and shelf
D. All-pass and comb
4. What does the Inside button do in the Sonnox SuprEsser?
A. Solos the original signal
B. Blends the two signals
C. Solos the output of the band-pass filter
D. Reveals the output of the band-reject filter
5. What is the purpose of the band-pass filter in the Sonnox SuprEsser?
A. To isolate problematic audio frequencies
B. To enhance the overall sound
C. To add effects
D. To compress the entire signal
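A minimal Python sketch of the side-chain principle described above (an illustration only; the 4-10 kHz band, threshold, and maximum reduction are assumed values, and this is not the Sonnox SuprEsser algorithm):

import numpy as np
from scipy.signal import butter, sosfilt

def simple_deesser(vocal, sr, lo=4000.0, hi=10000.0, threshold=0.05, max_cut_db=8.0):
    """Attenuate the vocal whenever a band-passed side chain detects sibilant energy."""
    sos = butter(4, [lo, hi], btype="bandpass", fs=sr, output="sos")
    side_chain = sosfilt(sos, vocal)                     # isolate the problem band
    win = max(1, int(0.005 * sr))                        # ~5 ms envelope window
    env = np.convolve(np.abs(side_chain), np.ones(win) / win, mode="same")
    over = np.clip(env / threshold, 1.0, None)           # how far the band exceeds the threshold
    cut_db = np.minimum(20.0 * np.log10(over), max_cut_db)
    return vocal * 10.0 ** (-cut_db / 20.0)              # apply gain reduction to the full vocal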
Pages 87-88
1. What does the horizontal line at 9 indicate in the band-pass filter?
A. The peak level of the problematic sound
B. The threshold level
C. The center frequency of the band
D. The extent of gain reduction achieved
2. What does the gain-reduction meter show?
A. The input signal
B. The extent of gain reduction achieved
C. The peak level of sound
D. The center frequency of the band
3. How can the user set the upper and lower limits of the band-pass filter?
A. By dragging the vertical lines
B. By clicking on the graph
C. By adjusting the threshold level
D. By using the gain-reduction meter
4. What does the vertical line at 6 represent?
A. The threshold level
B. The peak level of sound
C. The frequency with the greatest energy
D. The extent of gain reduction
5. What happens when the automated threshold follows the general level?
A. Peak reductions remain the same
B. The gain-reduction meter resets
C. The band-pass filter is disabled
D. The input signal is muted
Chapter 9
Pages 89-90 Figure 9.1
1. What do plugins of the reflection-simulation type primarily generate?
A. Spatial effects
B. Direct sound
C. Pre-delay
D. Decay time
2. What is pre-delay in the context of room reverberation?
A. The time gap between direct sound and processed sound
B. The time gap between early and late reflections
C. The time gap between direct sound and late reflections
D. The time gap between dry sound and wet sound
3. What do smaller intimate halls typically have for pre-delays?
A. 15-18 ms or less
B. 30-40 ms
C. 10-15 ms
D. 20-25 ms
4. What does Sonnox's Oxford Reverb allow users to set regarding the simulated room?
A. Shape and size
B. Only size
C. Only shape
D. Only decay time
5. What is the purpose of the pre-delay function in a plugin?
A. To define when reflections interact with direct sound
B. To adjust the decay time
C. To control the room size
D. To equalize the sound
Page 90 Questions:
1. What features does Sonnox's Oxford Reverb offer for controlling early reflections?
2. How does the 'Taper' control in Oxford Reverb affect the sound?
3. What is the purpose of the 'Absorption' control in the Oxford Reverb plugin?
4. What controls are available for shaping the reverb tail in Oxford Reverb?
5. How does increasing the 'Feedback' control in Oxford Reverb affect the sound?
Page 91 Figure 9.2 Questions:
1. What is the purpose of the EQ section in plugins like Oxford Reverb?
A. To enhance low frequencies
B. To emulate natural frequency response of physical spaces
C. To increase high-frequency content
D. To reduce overall volume
2. Why do rooms absorb high frequencies more easily than low frequencies?
A. Because high frequencies are louder
B. Because low frequencies travel further
C. Because of the physical properties of sound
D. Because high frequencies are less desirable
3. What effect does artificial reverberation have on a dry signal?
A. Makes it sound brighter
B. Makes it sound darker
C. Has no effect
D. Makes it sound more natural
4. What can be done to mitigate the effect of undesirable low-end content in reverb?
A. Increase high frequencies
B. Apply judiciously applied EQ
C. Add more reverb
D. Reduce the volume
5. What is one way to create greater warmth in a reverb sound?
A. Boost high frequencies
B. Apply a gentle boost to lower frequencies
C. Reduce the reverb time
D. Increase the dry signal level
Page 91 Questions:
1. What role does 'Overall Size' play in creating the aural image of space in reverberation?
2. How does 'Dispersion' affect the reflections in reverberation?
3. What is the effect of 'Phase Difference' on the stereo sound field?
4. Why is EQ used in reverberation plugins, and what is its effect on high frequencies?
5. What adjustments can be made to lower frequencies in reverberation to enhance warmth?
Page 92 Figure 9.3 True/False
1. The Nimbus GUI has three main panes.
2. The Pre-delay parameter in Nimbus helps to decrease the clarity of the signal.
3. The Low-Mid Balance dial adjusts the reverb time for lower frequencies only.
4. In larger spaces, reverberation lasts longer in lower frequencies according to the document.
5. Damping controls the way the highest frequencies are affected in the reverb.
Pages 92-93 Figure 9.4
1. What is the purpose of the Pre-delay in reverb settings?
2. How does the Reverb Size affect the sound of the reverb?
3. What does the Damping Frequency knob control in reverb settings?
4. What is the effect of the Width control on reverb?
5. What is the function of the Tail Suppress feature in reverb settings?
Page 94 Figure 9.6
1. What are the three general types of reverb that engineers can select from in the Attack pane?
2. How does the Diffuser Size knob affect the reverb settings?
3. What is the purpose of the Envelope Attack control in the plugin?
4. What does the Envelope Time control do in the plugin?
5. How does the Envelope Slope controller affect the signal as it enters the plugin?
Page 94 Figures 9.5-9.6
1. What does the Output pane contain for controlling the reverb settings?
A. Dials for controlling level and EQ of early and late reflections
B. Only a selection of filters for input signal
C. A single dial for overall reverb control
D. A visual representation of sound waves
2. What types of reverb can engineers select in the Attack pane?
A. Plate, chamber, and hall
B. Room, hall, and echo
C. Chamber, echo, and delay
D. Plate, room, and chamber
3. What does the Diffuser Size knob model?
A. The dimensions of irregularities on reflective surfaces
B. The overall volume of the reverb
C. The type of reverb used
D. The speed of sound in the room
4. What does the Envelope Attack control?
A. The way the signal enters the plugin
B. The overall reverb time
C. The type of reflections
D. The frequency of the input signal
5. What effect does a gradual slope have on the Envelope Slope controller?
A. Filters the later energy quite strongly
B. Increases the early reflections
C. Decreases the overall reverb time
D. Has no effect on the signal
Page 95 Figure 9.7 True/False
1. The Early subpage button allows users to adjust parameters related to early reflections.
2. Smaller values on the Early Attack dial produce weaker early reflections.
3. The Early Time knob adjusts the length of time over which early reflections are spread.
4. The Early Slope dial models air absorption through a high-pass filter.
5. Early Pattern allows the user to choose between five distinct groupings of early reflections.
Page 96-97 Figure 9.8
1. What are the three main functions of the dials in the Warp pane?
2. How does the Threshold dial affect the compressor's processing?
3. What is the purpose of the Cut button in the Warp pane?
4. What does the Attack dial control in the compressor?
5. What options are available for changing the bit depth in the Warp pane?
Pages 97-98 Figures 9.9-9.10 True/False
1. The Verb Session plugin's GUI is divided into three main areas.
2. The time-structure display shows the main parameters of reverberation.
3. The initial early reflections are represented by a group of horizontal bars.
4. The solidly colored area in the time-structure display depicts the dense wash of late reflections.
5. The plugin generates frequency bands that are shown by the decay curves in the time-structure display.
Page 99 Figure 9.11 True/False
1. Sonnox includes early-reflection controls for width, taper, feed along, feedback, and absorption.
2. The Width control in Sonnox affects the loudness of the reflections.
3. Absorption models the amount of high-frequency reduction that can occur in a room.
4. Reverb Time sets the length of time in seconds that it will take for the tail to fade to silence.
5. Diversity alters the width of the reverb's stereo image.
Page 99 Figure 9.11
1. What does the Room Size knob control in the plugin?
A. The volume of the room in cubic meters
B. The frequency of the audio
C. The gain of the early reflections
D. The length of the reverb tail
2. What is the suggested Pre-Delay value for the Musikvereinssaal?
A. 20.1 ms
B. 15 ms
C. 12 ms
D. 25 ms
3. What does the Decay Time knob specify?
A. The volume of the room
B. The length of the reverb tail
C. The frequency of the audio
D. The gain of the early reflections
4. What effect do longer pre-delays have on sound?
A. They make the sound louder
B. They help listeners distinguish between direct and reflected sound
C. They reduce the reverb effect
D. They increase the volume of the source
5. What is the default setting for the Damping sliders?
A. 50%
B. 75%
C. 100%
D. 125%
Page 100 Figures 9.12-9.13 True/False
1. Engineers can alter the tonal quality of the reverb through the filter pane in the upper right corner of the GUI.
2. The filter pane has only one adjustable band for frequency spectrum adjustments.
3. The Input slider determines the level of the signal that exits the plugin.
4. The Dry/Wet knob designates the mix of untreated and treated signal that leaves the plugin.
5. The default setting of the Dry/Wet knob is 50%.
Page 101 Figure 9.14
1. What type of controls does FabFilter focus on in Pro-R?
A. Technical controls
B. Non-technical controls
C. Complex controls
D. Basic controls
2. What does the Distance knob in Pro-R replicate?
A. The effect of moving closer or farther from the sound source
B. The effect of changing the reverb type
C. The effect of adjusting the mix level
D. The effect of altering the stereo width
3. What does the Brightness knob control in Pro-R?
A. The balance between high and low frequencies
B. The overall volume of the reverb
C. The decay rate of the reverb
D. The stereo width of the sound
4. What feature does Pro-R use instead of a crossover system for modifying decay rates?
A. Decay-rate EQ
B. Parametric EQ
C. Dynamic EQ
D. Static EQ
5. What does the Character control in Pro-R alter?
A. The style of the reverb
B. The overall volume of the reverb
C. The stereo width of the sound
D. The decay time of the reverb
Page 102 Figure 9.15-16
1. What is the principle behind convolution in reverb plugins?
2. How do engineers create impulse responses for convolution reverb?
3. What challenges do convolution reverbs face regarding impulse responses?
4. What unique approach does EastWest's Spaces reverb take in capturing impulse responses?
5. What controls do users have access to in the GUI of EastWest's Spaces II?
Page 102 Convolution
1. What does the term 'convolution' refer to in the context of audio processing?
A. The blending of one signal with another
B. The recording of sound in a studio
C. The process of deconvolution
D. The simulation of sound in a digital format
2. What is used to create impulse responses for reverb plugins?
A. A series of discrete measurements
B. Previously recorded impulse responses
C. Room reflections from an initial stimulus
D. A full-range frequency sweep
3. What is a common issue with convolution reverbs if impulse responses are not created carefully?
A. They can sound too natural
B. They can result in sterile simulations
C. They require less processing power
D. They do not use room reflections
4. How does EastWest's Spaces reverb differ from traditional convolution reverbs?
A. It uses generalized responses of a space
B. It focuses on specific instruments' reverberation characteristics
C. It does not require impulse responses
D. It eliminates the need for mathematical calculations
5. What can users adjust in the GUI of Spaces II?
A. The type of audio input
B. The length of the pre-delay
C. The number of impulse responses
D. The type of room reflections
Chapter 10
Page 105 true false
1. A container holds digital audio information along with metadata.
2. Lossless codecs achieve data compression by removing information that humans do not hear well.
3. WAV and AIFF are both standard uncompressed file types for audio.
4. FLAC is a lossy audio codec that compresses data without any loss of audio quality.
5. ALAC uses an m4p audio-only container and compresses audio data with no loss of information.
Page 106 true false
1. The mp3 codec was developed by the Fraunhofer Institute in collaboration with other institutions.
2. AAC is a replacement for the mp3 codec and was launched by Apple in 2003.
3. Vorbis is a patented codec that is used in the Ogg container format.
4. The mp3 codec compresses audio data to occupy about 50% of the original file's storage space.
5. AAC and ALAC files both use the extension m4a.
Page 105
1. What is the primary function of a container in digital audio files?
2. What are the two types of compression methods used by codecs?
3. What is the difference between FLAC and ALAC in terms of their container formats?
4. What is the significance of the Free Lossless Audio Codec (FLAC) in the audio industry?
5. What are the two standard uncompressed file types available to recordists?
Page 107
1. What percentage of the original file's storage space does an mp3 typically occupy?
A. 5%-6%
B. 9%-10%
C. 15%-20%
D. 25%-30%
2. In what year was the mp3 codec named?
A. 1989
B. 1991
C. 1995
D. 2003
3. What is the main problem with the perceptual coding used to create an mp3?
A. Loss of sound quality
B. Increased file size
C. Compatibility issues
D. Limited playback devices
4. What audio container does AAC employ?
A. m4a
B. mp3
C. wav
D. m4p
5. Which organization created the Vorbis codec?
A. Fraunhofer Institute
B. Xiph.Org Foundation
C. Apple Inc.
D. Moving Picture Expert Group
Page 107-108 True /false
1. The International Telecommunication Union (ITU) introduced algorithms for measuring loudness in 2006.
2. The European Broadcasting Union (EBU) proposed a metering system four years after the ITU introduced its algorithms.
3. The EBU recommended that the loudness level should be constant and uniform within a program.
4. Loudness normalization ensures that the average loudness of all programs is the same.
5. The ITU suggests that subjective loudness is not important to the music industry.
Page 107
1. What are the file sizes of the original WAV file and the converted WAV file according to Table 10.1?
2. What is the definition of loudness as defined by the Advanced Television Systems Committee (ATSC) in 2013?
3. What changes did the International Telecommunication Union (ITU) introduce in 2006 regarding loudness measurement?
4. What was the purpose of the European Broadcasting Union's (EBU) proposal four years after the ITU's algorithms were made available?
5. What types of audio files were produced for Monteverdi's 'Si dolce è 'l tormento' and who produced them?
Page 108-109
1. What are the three time scales used in digital meters that conform to the EBU recommendations, and what do they measure?
2. What is the purpose of the Loudness Range (LRA) in digital meters, and how is it calculated?
3. Explain the concept of K-weighting and its significance in audio metering.
4. What are the two varieties of relative scales mentioned in the document, and how do they differ?
5. What does the absolute scale in digital meters show to recordists, and what is its target level?
Page 112 true/false
1. The EBU suggests an integrated loudness level of -23.0 LUFS.
2. The maximum permitted true peak level according to the EBU is -2.0 dB.
3. The AES promotes a target of -16.0 LUFS for streaming platforms.
4. The normalization targets for streaming platforms are lower than those for broadcast standards.
5. In the USA, the integrated loudness target is -24.0 LUFS with a tolerance of 2.0 dB.
Page 107-108
1. What organization introduced algorithms for measuring perceived and peak levels of digital audio signals in 2006?
A. European Broadcasting Union (EBU)
B. International Telecommunication Union (ITU)
C. American National Standards Institute (ANSI)
D. Institute of Electrical and Electronics Engineers (IEEE)
2. What is the maximum momentary loudness based on?
A. A sliding window of 1 second
B. A sliding window of 400 milliseconds
C. A fixed time interval of 3 seconds
D. An average of the entire signal
3. What does loudness normalization ensure according to the EBU?
A. The loudness level is constant throughout a program
B. The average loudness of all programs is the same
C. The loudness level varies significantly within a program
D. The peak loudness is always the highest
4. What is the purpose of the maximum true-peak level descriptor?
A. To measure the average loudness of a signal
B. To comply with the technical limits of digital systems
C. To assess the artistic quality of audio
D. To determine the overall energy of a program
5. Which industry has adopted the EBU standards for broadcast?
A. Film industry
B. Music industry
C. Television broadcasting
D. Radio broadcasting
Chapter 11
Page 125 Questions:
1. What are the two main approaches engineers take when recording the sound of a grand piano?
2. How does the raised lid of a grand piano affect the sound frequencies?
3. What is the significance of microphone placement in relation to the piano's sound quality?
4. What frequency range do engineers need to consider when selecting microphones for recording a grand piano?
5. Why might engineers prefer unidirectional microphones when recording a grand piano?
Page 125
1. What is the primary goal of recordists when setting up microphones for a grand piano?
A. To capture a balanced sound
B. To record only high frequencies
C. To focus on low frequencies
D. To eliminate all reflections
2. Where do engineers typically locate the higher frequencies in a stereo recording of a grand piano?
A. On the right side of the playback image
B. In the center of the stereo image
C. On the left side of the playback image
D. Behind the piano
3. What effect does the raised lid of a grand piano have on sound frequencies?
A. It amplifies lower frequencies
B. It reflects mid and high frequencies better
C. It has no effect on sound frequencies
D. It only affects the soundboard
4. What is a common issue when placing a piano too close to a wall?
A. It enhances the sound quality
B. It creates unwelcome early reflections
C. It boosts high frequencies
D. It eliminates phase anomalies
5. Which type of microphone do engineers often choose to achieve a uniform tonal quality across the piano's frequency spectrum?
A. Omnidirectional microphones
B. Unidirectional microphones
C. Dynamic microphones
D. Condenser microphones
Pages 125-126 Questions:
1. What are the three factors that engineers consider when deciding on the location for stereo miking of a grand piano?
2. What is the typical distance range for an ORTF pair of microphones from the piano?
3. Describe the A-B spaced pair technique and its typical microphone spacing.
4. What is the purpose of the mid-side coincident pair technique in recording?
5. How does the placement of microphones affect the sound captured from the piano?
Pages 125-126 True/false
1. Engineers consider tonal balance, direct to ambient sound ratio, and stereo image when deciding on microphone placement for recording a grand piano.
2. An ORTF pair of microphones should be placed at least 4 meters away from the piano.
3. The A-B spaced pair technique typically uses directional microphones.
4. The mid-side coincident pair technique allows for adjustment of the stereo image after tracking has been completed.
5. Moving microphones from the tail to the front of the piano decreases mid- and high-frequency information captured.
Page 127-128 Questions:
1. What is the purpose of placing microphones inside the piano when recording in unfavorable room acoustics?
2. What is the recommended height for positioning microphones above the strings to capture a broad spectrum of sound?
3. How do engineers typically position microphones in a spaced pair configuration for recording piano?
4. What is the effect of using coincident pairs of microphones positioned over the hammers of the piano?
5. What is the advantage of using a quasi-ORTF array with cardioid microphones in piano recording?
Pages 127-128
1. What is the main reason engineers place microphones inside the piano when recording?
A. To enhance the room acoustics
B. To minimize room reverberation effects
C. To capture external sounds
D. To increase microphone sensitivity
2. What is the recommended distance for positioning microphones above the strings to capture sound effectively?
A. 10 to 15 centimeters
B. 20 to 25 centimeters
C. 30 to 35 centimeters
D. 40 to 45 centimeters
3. In a spaced pair configuration, where is one microphone typically placed?
A. Over the bass strings
B. Over the treble strings
C. Under the piano
D. At the back of the piano
4. What is the effect of positioning coincident pairs of microphones directly over the hammers?
A. It produces a warmer sound
B. It produces a brighter, more percussive sound
C. It reduces background noise
D. It captures only low frequencies
5. What is a benefit of using a quasi-ORTF array with cardioid microphones?
A. It captures only high frequencies
B. It provides uneven sound coverage
C. It yields an acceptable sound with even frequency coverage
D. It requires more microphones
Chapter 12
Pages 129-130
1. What do many singers prefer regarding their voices before microphones capture the sound?
A. To develop in the room
B. To be recorded immediately
C. To use only spot microphones
D. To avoid any reverberation
2. What is a common strategy for microphone placement when the stage is wide enough?
A. Pointing the piano towards the audience
B. Pointing the piano towards the rear of the stage
C. Placing the singer behind the piano
D. Using only unidirectional microphones
3. What is the purpose of placing the vocalist's microphones behind the music stand?
A. To capture more ambient sound
B. To prevent reflections from entering the microphones
C. To increase the volume of the singer
D. To reduce the distance from the singer
4. What is the recommended distance for placing cardioid spot microphones from the singer?
A. 30 to 50 centimeters
B. 60 to 100 centimeters
C. 1 to 2 meters
D. 2 to 3 feet
5. What do engineers regularly employ in the method involving the piano and singer?
A. Cardioid microphones only
B. Stereo pairs of omnidirectional microphones
C. Only unidirectional microphones
D. Dynamic microphones only
Pages 129-130 Questions
1. What is the significance of the critical distance in microphone placement for recording singers and pianists?
2. How do engineers typically position the singer and piano to minimize microphone bleed?
3. What is the role of the music stand in relation to the vocalist's microphones?
4. What is the rationale behind using a spot microphone in conjunction with a main stereo pair?
5. What adjustments do recordists make to find the ideal microphone placement for the A-B pair of omnidirectional microphones?
Pages 130-131
1. What is a common issue when recording a violin and piano together?
A. The violin occupies much of the left side of the sound stage
B. The piano is too quiet
C. The microphones are too far away
D. The violinist is not positioned correctly
2. Where should the microphones be placed to capture direct sound from both the violin and piano?
A. In front of the instruments
B. Behind the violinist
C. Near the tail of the piano
D. On the left side of the stage
3. What is the recommended distance between the microphones when recording?
A. 60 to 70 centimeters
B. 1 to 2 meters
C. 2 to 3 meters
D. 3 to 4 meters
4. What type of microphones may be added if the recording lacks presence or intimacy?
A. Cardioid spot mics
B. Omnidirectional mics
C. Dynamic mics
D. Condenser mics
5. What should engineers experiment with to achieve the best blend of intimacy and room sound?
A. Height and distance in front of each instrument
B. Type of microphones used
C. Number of microphones
D. Position of the audience
Pages 130-131 Questions:
1. What challenges do engineers face when recording a violin and piano together in a typical concert arrangement?
2. How does the positioning of the violinist affect the recording quality?
3. What is the significance of the critical distance in recording?
4. What alternative arrangement do some recordists use when recording a violin and piano?
5. What role do cardioid spot microphones play in the recording process?
Page 131 True-false
1. Cellists usually sit in the curve of the piano during concerts.
2. An A-B pair of unidirectional microphones is used to capture the sound of the cello and piano.
3. Supplementary cardioids are added to capture a more intimate cello sound.
4. The cellist should always face away from the piano to reduce leakage from the larger instrument.
5. The principles of microphone placement apply only to the cello and violin.
Pages 131-132
1. What is the primary frequency range that radiates through the unstopped finger holes of clarinets and oboes?
A. Below 1.0 kHz
B. Above 5.0 kHz
C. Between 8.0-10.0 kHz
D. Below 3.0-4.0 kHz
2. Where do engineers often place a spot microphone for clarinets?
A. In front of the instrument
B. Inside the bell
C. At the embouchure hole
D. On the lower hand of the player
3. What is a common issue when placing a microphone at the embouchure hole of a flute?
A. It does not pick up high frequencies
B. It captures too much bass
C. It can sound overly breathy
D. It causes feedback
4. What technique do engineers often use to deal with the shortcomings of placing a mic at the embouchure hole?
A. Place the mic closer to the player
B. Use a dynamic microphone
C. Position the mic higher
D. Use a condenser microphone
5. What principle should recordists observe to avoid phase cancellation between microphones?
A. The two-to-one principle
B. The three-to-one principle
C. The four-to-one principle
D. The five-to-one principle
Page 132 Questions:
1. What is the primary way brass instruments, such as trumpets and trombones, radiate sound?
2. Why do many recordists prefer to place microphones at least a meter in front of trumpets and trombones?
3. What microphone types do recordists often choose for capturing the sound of trumpets and trombones, and why?
4. How does the bell position of the French horn affect microphone placement?
5. What is the effect of on-axis microphone placement compared to off-axis placement for brass instruments?
Page 132 True-false
1. Brass instruments radiate sound from their bells, with higher frequencies propagating directly backward.
2. Recordists prefer to place microphones at least a meter in front of trumpets and trombones to achieve a balanced frequency spectrum.
3. Microphone choices for brass instruments include only dynamic microphones.
4. The bell of the French horn typically faces towards the audience.
5. On-axis microphone placement produces a warmer tonal quality compared to off-axis placement.
Chapter 13
Page 135
Questions:
1. What is the typical seating arrangement for a string quartet?
A. Two violins on the left, cello and viola on the right
B. Cello and viola on the left, two violins on the right
C. All instruments in a straight line
D. Two violins in the back, cello and viola in the front
2. What is the height range for the microphones when placed in the center of the quartet?
A. 1 to 2 meters
B. 3 to 4 meters
C. 5 to 6 meters
D. 7 to 8 meters
3. What technique do engineers use to create a sense of depth perspective in a string quartet recording?
A. Spot miking
B. A-B pair of omnis
C. Decca tree
D. Close miking
4. What can negatively impact the tonal quality when using microphones on the floor?
A. Wind interference
B. Reflections from the floor
C. Background noise
D. Improper mic placement
5. How far apart should the two omnis be spaced when placed in the center of the quartet?
A. 10 to 20 centimeters
B. 20 to 30 centimeters
C. 30 to 40 centimeters
D. 40 to 50 centimeters
Page 135 trio true-false
1. When recording a piano trio, engineers typically use a single A-B array of omnidirectional microphones in front of the performers.
2. The microphones are usually positioned closer to the piano than to the violin and cello.
3. Recordists may use spot microphones close to the piano to increase its presence in the mix.
4. In some recording scenarios, the violinist and cellist face away from the piano.
5. A main A-B pair of omnidirectional microphones is used to give the overall sonic impression of the ensemble.
Page 136 true false
1. Depth perspective is not an important consideration when miking choirs.
2. Engineers must consider the distance discrepancy between the microphones and the front and back rows of the choir.
3. Placing the microphone array within a meter or two of the choir results in a drier quality of sound.
4. A distant microphone location may enhance the blended texture projected by the choir ensemble.
5. Bass frequencies reverberate in concert halls as strongly as higher frequencies do.
Chapter 14
Pages 137-138 Questions:
1. What microphone technique is used by Simon Eadon in his recordings of Marc-André Hamelin?
2. Describe the microphone setup used in the video made at the University of Surrey's Institute of Sound Recording for capturing Ravel's Sonatine, No. 2.
3. What is the size of Henry Wood Hall, and how does Eadon capture its acoustic?
4. What additional equipment did the engineer use to augment the room's ambience during the recording session?
5. What was the purpose of raising the lid of the piano higher than usual during the recording session?
Page 138
Questions:
1. Who recorded Winona Zelenka playing the Bach cello suites?
A. Ron Searles
B. Joseph Guarnerius
C. Studer A80
D. Royer
2. What type of microphones did Ron Searles choose for the recording?
A. Dynamic microphones
B. Condenser microphones
C. Royer ribbon microphones
D. Lavalier microphones
3. What was the distance of the center microphone from the cello?
A. 1 to 1.25 meters
B. 2 to 2.5 meters
C. 3 to 4 feet
D. 5 to 6 feet
4. What recording machine was used for the A-B pair?
A. Studer A80
B. Royer R-122
C. Decca tree
D. Analog tape machine
5. What effect did the figure-8 polar pattern of the ribbons have on the recording?
A. Increased volume
B. Reduced phase problems
C. Enhanced treble frequencies
D. Decreased reverb
Page 139-140
1. Where was the album 'An die Musik' recorded?
A. Isabel Bader Centre for the Performing Arts
B. Daen Fish Studios
C. Queen's University
D. Acoustically isolated performance hall
2. What type of microphones were used to capture the direct sound from the piano?
A. Neumann KM 84 cardioid mics
B. Beyerdynamic M 130
C. AKG C480s
D. Faulkner phased array
3. What was the purpose of the baffle between the instruments?
A. To enhance the piano sound
B. To reduce spill
C. To amplify the double bass
D. To capture ambient sound
4. What type of microphones were used as spot microphones for the double bass?
A. Neumann KM 84
B. Beyerdynamic M 130
C. AKG C480s
D. Faulkner phased array
5. What effect did Wolpen's approach have on the recording of the double bass and piano?
A. It made the recording sound distant
B. It allowed for a transparent interaction
C. It emphasized the piano over the bass
D. It created a muffled sound
Chapter 15
Page 143
1. What is the main focus of the chapter discussed in the document?
A. Analyzing the works of contemporary composers
B. Recording modern pop music
C. Exploring the history of music technology
D. Achieving a period-style interpretation of an early eighteenth-century cantata
2. Which piece of music is specifically mentioned in the document?
A. Symphony No. 5
B. Amor, sorte, destine
C. Clair de Lune
D. The Four Seasons
3. Who were the performers featured in the recording discussed in the document?
A. Daniel Thomson and Thomas Leininger
B. Robert Nation and Kyle Ashbourne
C. Tomaso Albinoni and Daniel Thomson
D. Thomas Leininger and Robert Nation
4. Where was the recording of 'Amor, sorte, destine' produced?
A. EMAC Recording Studios in London, Canada
B. Abbey Road Studios in London, UK
C. Capitol Studios in Los Angeles, USA
D. Sunset Sound Studios in Hollywood, USA
5. What modern technology is mentioned as being used to simulate historic acoustics?
A. Auto-tune software
B. Digital synthesizers
C. Convolution reverbs
D. Loop pedals
Page 143 Questions
1. What challenges do musicians face when trying to perform music from the sixteenth to nineteenth centuries in modern settings?
2. How does modern studio technology help in achieving a historically informed performance?
3. What specific recording project is discussed in the chapter, and who were the key contributors?
4. What was the focus of the pre-production stage of the recording project?
5. What is the significance of the acoustic design mentioned in the chapter?
Page 144
1. What is required for people interested in historical performance to recover old methods?
A. Reconstruct practices from surviving sources
B. Use modern interpretation techniques
C. Follow contemporary musicians
D. Ignore historical documents
2. What does Daniel use to re-create the natural style of performance?
A. Modern singing techniques
B. Rhetorical delivery techniques
C. Electronic music
D. Traditional instruments
3. Which principle represents a noticeable departure from modern practice according to the document?
A. Use of electronic effects
B. Singing in unison
C. Highly articulated phrasing
D. Strict adherence to written scores
4. What did singers of the past insert to compartmentalize thoughts and emotions?
A. Musical interludes
B. Grammatical and rhetorical pauses
C. Instrumental solos
D. Vocal harmonies
5. According to Vicentino, what has a great effect on the soul?
A. Tempo fluidity
B. Loudness of voice
C. Use of instruments
D. Length of performance
Page 144 Questions:
1. What are the older principles of interpretation in historical performance compared to modern practices?
2. How does Daniel's performance style reflect historical practices?
3. What techniques does Daniel employ in his performance of 'Amor, sorte, destine'?
4. What role do pauses play in historical singing practices according to the document?
5. What did Francis Clement explain about the rationale behind adding unnotated pauses in singing?
Page 145
1. What is required for singers to convey emotions effectively according to the document?
A. Use a single vocal timbre
B. Differentiate their registers
C. Sing in a monotone voice
D. Avoid emotional expression
2. What did David Ffrangcon-Davies criticize in 1905?
A. Emotional delivery in singing
B. Versatile tonal palette
C. Monochromatic approach to timbre
D. Use of rhetorical pauses
3. What does accent denote in the context of singing?
A. Stress on a single syllable
B. Force of voice on a word
C. Emotional expression
D. Pacing of delivery
4. What is the purpose of compartmentalization in singing?
A. To emphasize all words equally
B. To create a monotone delivery
C. To avoid emotional expression
D. To organize and pace ideas
5. What do emphatic words receive within a sentence?
A. The greatest force
B. No emphasis
C. Equal stress
D. Monotone delivery
Page 145 Questions:
1. What is the significance of vocal timbres in conveying emotions according to the document?
2. How did David Ffrangcon-Davies view the tonal palette in singing during his time?
3. What is the purpose of inserting grammatical or rhetorical pauses in singing as described in the document?
4. What is the difference between accent and emphasis in singing as explained in the document?
5. How does the application of accent and emphasis contribute to the delivery of complex ideas in singing?
Page 146
1. What did Daniel create after analyzing the song's text using the discussed principles?
A. A dramatic spoken reading of the poem
B. A musical score
C. A historical document
D. A vocal exercise
2. How did singers in earlier times view their role in performance?
A. As simple interpreters
B. As re-creators
C. As composers
D. As critics
3. What did composers of the past notate in their music?
A. Subtleties of rhythm and dynamics
B. Detailed vocal techniques
C. Emotional expressions
D. Historical context
4. Who commented that some methods of singing cannot be written down?
A. Domenico Corri
B. Nicola Vicentino
C. Andreas Ornithoparchus
D. Charles Avison
5. What did Manuel Garcia suggest about performers in 1857?
A. They should sing exactly as noted
B. They should focus on historical accuracy
C. They should avoid personalizing songs
D. They should alter pieces to enhance their effect
Page 146 Questions:
1. What process did Daniel follow to create a dramatic spoken reading of the poem after analyzing the song's text?
2. How did singers in earlier times differ in their approach to performing music compared to modern singers?
3. What did composers of the past typically notate in their music, and what was the implication of this practice?
4. What did Nicola Vicentino and Andreas Ornithoparchus express about the limitations of musical notation?
5. What was the perspective of Domenico Corri regarding the performance of music as noted in 1781?
Page 146-147 true false
1. Nicola Vicentino suggested that changing tempo can greatly move the audience during an oration.
2. Vocalists sang 'piano e forte' and 'presto e tardo' only to conform to the ideas of the composer.
3. Domenico Corri proposed that singers should deliver some phrases in quicker or slower time to emphasize particular words.
4. Daniel varied tempo in his performances to avoid being considered 'inexpressive' and 'uncouth'.
5. John Addison referred to Daniel's performance as lacking 'finish' to the song.
Pages 147-149
1. What was the main goal in producing 'Amor, sorte, destine'?
A. To enhance period interpretation through modern studio practices
B. To create a live performance without recording
C. To focus solely on historical performance
D. To eliminate the use of microphones
2. What microphone was used for Daniel's vocals?
A. DPA 4006A
B. AKG 48085
C. Royer R122
D. Milab DC196s
3. How were the microphones for the harpsichord arranged?
A. In a single line
B. In an AB arrangement
C. In a circular pattern
D. Randomly placed
4. What was the recording format used by Robert Nation?
A. 8 bits/48 kHz
B. 16 bits/44 kHz
C. 24 bits/96 kHz
D. 32 bits/192 kHz
5. What type of microphones were chosen for the close miking of the harpsichord?
A. Omnidirectional
B. Unidirectional
C. Dynamic
D. Condenser
Pages 147-149 Questions:
1. What were the main production techniques used in 'Amor, sorte, destine' to enhance period interpretation?
2. How did the roles of music director and producer benefit the project?
3. What considerations were made regarding microphone selection for the harpsichord?
4. What was the process followed after recording several takes of the cantata?
5. What was the significance of using Pro Tools HD for tracking?
Pages 149-151
1. What was the main goal of the editing process for 'Amor, sorte, destine'?
A. To add more reverb
B. To increase the volume of the recording
C. To achieve a historically informed conception
D. To create a live performance feel
2. Which feature was used to decrease jack noise in the recording?
A. Spectral repair feature of iZotope Rx Advanced
B. Highpass filter in Universal Audio's Massenburg DesignWorks MDWEQ5
C. Compression in Pro Tools
D. Dynamic equalizer bx_dynEQ V2
3. What type of ambience did Robert design for the recording?
A. Ambience of a concert hall
B. Natural ambience of a large church
C. No ambience at all
D. Artificial ambience approximating small rooms
4. What was the decay time set for the large Hall B reverb in the vocal track?
A. 837 milliseconds
B. 1.7 seconds
C. 2.0 seconds
D. 3.0 seconds
5. What was used to alleviate mild harshness at loud moments in the recording?
A. Dynamic equalizer bx_dynEQ V2
B. Highpass filter in McDSP's Filterbank F202
C. Compression back bus
D. Artificial ambience
Pages 149-151 true false
1. The editing process aimed to enhance the visual connection of live performance.
2. Kyle Ashbourne was the assistant engineer involved in the recording process.
3. The mixing sessions focused on three main elements: reverb, compression, and EQ.
4. The Lexicon 224 reverb was set to a decay time of 2.0 seconds for both the vocal and harpsichord tracks.
5. The recording aimed to replicate the experience of listeners seated 3-4 meters from the performers.
Приложение 1 – Глоссарий
''Active" device—one that requires external power to function.
Aliasing—sampling rates that are too low to map a waveform accurately prohibit the faithful restoration of signals. A fault known as aliasing occurs when too few samples cause a device lo interpret the voltage data as a waveform different from the one originally sampled.
Amplitude- a measure of the change of air pressure in a soundwave above normal (compression) and below normal (rarefaction). In other words, it is a measure of the strength of a sound without reference to its frequency. We perceive amplitude as loudness and express it in decibels (dB) of sound pressure level (SPL),
Analog audio---the representation of a signal by continuously variable and measurable physical quantities, such as pressure or voltage.
Analog-lo-digital converter (ADC)—converts analog signals to digital code using pulse code modulation (PCM).
Audio volume - in relation to the measurement of loudness, "audio volume" refers to a subjective combination of level, frequency, content, and duration.
Bit - an abbreviation of the expression "binary digit." Binary means something based on or made up of two things, and in digital audio systems, these two things are the numbers 0 and 1.
Bit depth - stipulates how many numbers (0 or 1) are used to represent each sample of a waveform.
Bit rate - indicates how many bits are transmitted per unit of time in digital audio.
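As a worked illustration of the entries above, here is a minimal Python sketch; the sample rate, bit depth, and channel count are assumed values for uncompressed stereo PCM, not figures from the text:

    # Illustrative sketch only: bit rate of uncompressed PCM audio (assumed values).
    sample_rate = 44_100     # samples per second
    bit_depth = 16           # bits used to represent each sample
    channels = 2             # stereo
    bit_rate = sample_rate * bit_depth * channels
    print(bit_rate)          # 1411200 bits per second, about 1.4 Mbit/s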
Byte - a group of eight digits, each digit being either 0 or 1.
Codec—an abbreviation of coder/decoder, a codec is a software application using algorithms to encode a digital signal into another format, often to reduce the size of the file (hence the term compression). Once encoded, the file must be decoded to re-create the original audio. If no information is lost in the process, the codec is called lossless, but if information has been removed, the codec is a lossy one.
Coloration - an audible change in the quality (timbre) of a sound.
Comb filtering — the short delays between two or more microphones used to capture a complex waveform can cause comb filtering, a set of mathematically related (and regularly recurring) cancellations and reinforcements in which the summed wave that results from the inadvertent cutting and boosting of frequencies resembles the teeth of a comb.
Complex soundwave—a waveform comprised of a collection of sine waves, integer multiples of the fundamental frequency, that is, a complex set of frequencies arranged in a harmonic or overtone series above the lowest frequency of the spectrum.
Condenser microphone—a mic that operates electrostatically. Its capsule consists of a movable diaphragm and a fixed backplate, which form the two electrodes of a capacitor (previously called a condenser; hence, the name) that has been given a constant charge of DC voltage by an external power source. As soundwaves strike the diaphragm, the distance between the two surfaces changes, and this movement causes the charge-carrying ability (capacitance) of the structure to fluctuate around its fixed value. The resulting variation in voltage creates an electrical current that corresponds to the acoustic soundwave.
Convolution—the blending or convolving of one signal with another. Convolution is the method used to create reverb plugins based on impulse responses.
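A minimal Python sketch of the idea, assuming NumPy is available; the dry signal and impulse response are toy values chosen for illustration, whereas real convolution reverbs convolve full-length audio with measured room IRs:

    import numpy as np

    # Illustrative sketch only: blending (convolving) a dry signal with an impulse response.
    dry = np.array([1.0, 0.5, 0.25])              # hypothetical dry samples
    impulse_response = np.array([1.0, 0.0, 0.3])  # hypothetical room impulse response
    wet = np.convolve(dry, impulse_response)      # the convolved ("wet") result
    print(wet)                                    # approximately [1.  0.5  0.55  0.15  0.075]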
Critical distance (reverberation radius) - the distance from a sound source to that point in an
enclosed space where the direct and reverberant fields are equal in level; that is, the total energy of one equals the other. Physicists have determined that the level of direct sound drops by 6.0 dB for every doubling of distance in a truly free field (that is, outdoors; the drop is somewhat smaller in an enclosed space) and that the level of reverberated sound remains more or less constant everywhere in a room. The ratio of direct to reverberated sound is 1:1 at the critical distance.
The critical distance may be found in any room by at least two methods: (1) place a microphone relatively far from a sound source and then move a second mic increasingly closer to the source until the difference in level between the two microphones is less than 3.0 dB; (2) either during a rehearsal or by situating a boom box at the performers' location (set between stations to produce white noise), recordists use an SPL meter to measure the SPL close to the source (approximately 30 centimeters or a foot away) and then double the distance and measure again, a point at which, according to the Inverse Square Law, the level will have decreased between 4.0 and 6.0 dB. After making note of the new level, they double the distance and take another measurement, repeating this procedure until the SPL stops dropping. By moving back to the area where the level began to remain constant, they find the critical distance.
At distances less than a third of the reverberation radius, the direct sound will be at least 10.0 dB stronger than the reverberated sound; hence, reverberation does not play a prominent role in the sound captured by a microphone. Conversely, at distances three times that of the critical distance, the direct sound is at least 10.0 dB weaker than the reverberated sound, and a microphone will primarily capture reverberation.
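As a rough numeric companion to method (2) above, here is a minimal Python sketch; the two SPL values are assumed, and the 6.0 dB drop per doubling holds strictly only in a free field, as the entry notes:

    import math

    # Illustrative sketch only: doubling the distance until direct sound stops dominating.
    def direct_level_db(level_at_ref_db, ref_distance_m, distance_m):
        # In a free field the direct level falls by 20*log10(d/d_ref) dB, about 6 dB per doubling.
        return level_at_ref_db - 20 * math.log10(distance_m / ref_distance_m)

    reverberant_level_db = 70.0   # assumed: roughly constant everywhere in the room
    level_at_30cm_db = 94.0       # assumed: SPL measured about 30 centimeters from the source

    distance_m = 0.3
    while direct_level_db(level_at_30cm_db, 0.3, distance_m) > reverberant_level_db:
        distance_m *= 2           # double the distance, as in the procedure above
    print(distance_m)             # prints 4.8, roughly the critical distance for these assumed levels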
Damping—a method of controlling the way frequencies die away or roll off in a reverb tail created during digital reflection simulation.
dBFS (dB Full Scale)—audio level in decibels referenced to digital full scale, that is, referenced to the clipping point ("full scale") in a digital audio system. 0.0 dB represents the maximum level a signal may attain before it incurs clipping.
dBTP (dB True Peak)—maximum inter-sample peak level of an audio signal in decibels referenced to digital full scale, that is, referenced to the clipping point ("full scale") in a digital audio system. 0.0 dB represents the maximum level a signal may attain before it incurs clipping.
Decibel (dB)—one tenth of a bel. Named after Alexander Graham Bell, a bel expresses the logarithmic relationship between any two powers. In acoustics, large changes in measurable physical parameters (pressure, power, voltage) correspond to relatively small changes in perceived loudness. Thus, linear scales, because of the huge numbers involved, do not correspond very well to the perceived sound, so a logarithmic scale is used to bring the numerical representation of perceived loudness and the numerical representation of the actual physical change into line with each other. Logarithms are a simple way of expressing parameters that vary by enormous amounts with smaller numbers (in other words, a large measurement range is scaled down to a much smaller and more easily usable range).
Because the human ear accommodates a large range of loudness, it is convenient to express loudness logarithmically in factors of ten. The entire range of loudness can be expressed on a scale of about 120.0 dB (0.0 dB is defined as the threshold of hearing), and within this logarithmic scale, increasing the intensity of sound by a factor of 10 raises its level by 10.0 dB, increasing it by a factor of 100 raises the level by 20.0 dB, and increasing it by a factor of 1,000 raises the level by 30.0 dB, and so on. Hence, the term decibel does not represent a physical value. It is a relative measurement based on the internationally accepted standard that 20 micropascals of air pressure equals 0.0 dB (20 micropascals of air pressure at 1,000 Hz is the threshold of hearing for most people).
Since the term decibel expresses a ratio and not a physical value, it can be applied to things other than loudness. In amplifiers, for example, a 200 watt amp is 1 bel or 10 decibels more powerful than a 20 watt amp, but it is important to understand that even though a 200 watt amp puts out ten times more electrical power than a 20 watt amp, it does not generate ten times more loudness, for a ten-fold change of electrical power is only perceived by the human ear as a 10.0 dB change of loudness. In other words, the underlying scales are different, and one scale should not be equated directly with the other. This can be shown in a graph, where the vertical axis represents dB and the horizontal axis represents electrical power (the curved line in Figure Glossary.2 is the logarithmic contour).
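The factor-of-ten relationship described above can be checked with a short Python sketch (illustrative only; the 200 W versus 20 W comparison echoes the amplifier example in the entry):

    import math

    # Illustrative sketch only: level change in decibels for a given change in power.
    def power_change_db(power_ratio):
        return 10 * math.log10(power_ratio)

    print(power_change_db(10))        # 10.0 dB for ten times the power
    print(power_change_db(100))       # 20.0 dB for one hundred times the power
    print(power_change_db(200 / 20))  # 10.0 dB: a 200 watt amp versus a 20 watt amp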
Diaphragm—the thin membrane in a microphone capsule that moves in reaction to soundwaves. In the early days of capacitor microphones, diaphragms were made from PVC (polyvinyl chloride, such as in the M7 capsule designed by Georg Neumann in 1952), but now they are usually made from PE (polyethylene), which is lighter, thus providing a more responsive capsule with greater sensitivity and articulation. Manufacturers fashion these materials into thin sheets coated with a gold surface so that the diaphragm may be charged to create a capacitive effect (hence, the term "metal film"). In the most expensive capsules, the gold is evaporated onto the membrane in a vacuum chamber to ensure uniform coverage. The more economical process of sputtering or spraying gold onto the membrane, the process used on less expensive microphones, can result in an uneven coat that causes membrane imbalance and inconsistencies in the capsule's response. In ribbon mics, the diaphragm consists of a thin strip of corrugated aluminum.
Diffuse or reverberant sound field—the area in a room in which reflections from the walls, ceiling, floor, etc. predominate (that is, the ensemble of reflections in an enclosed space). In other words, the sounds arrive at the listening position/microphone randomly from all directions and the direct sound no longer dominates. These reflections have their high-frequency content attenuated by surface absorption, as well as by the air, and reach the listener/microphone at oblique angles of incidence, which causes further high-frequency loss. This is also the field in which direct sound travels as a plane wave (in plane waves, intensity decreases in a linear relationship to distance traveled, that is, the Inverse Square Law no longer applies).
Digital audio—the use of a series of discrete binary numbers (0 or 1) to represent the changing voltage in an analog signal.
Digital-to-analog converter (DAC)—converts digital code to an analog signal (voltage), so that non-digital systems can use the information.
Direct or free sound field—sound arriving perpendicularly to the listening position or the diaphragm of a microphone without reflections (a purely direct field can exist only where sound propagation is undisturbed, such as in an open space free from all reflections). The direct path of sound is the shortest route from the sound source to the listening position. It is also the area in which soundwaves propagate spherically and the Inverse Square Law applies. This field ends where the sound pressure level ceases to fall by 6.0 dB for every doubling of the distance.
Distance factor—an indication of how far recordists can locate a directional microphone from a sound source and have it exhibit the same ratio of direct-to-reverberant sound pickup as an omnidirectional microphone.
Dither—the technique of adding specially constructed noise to a signal before its bit depth is reduced to a lower level. It alleviates the negative effects of quantization by replacing nonrandom distortion with a far more pleasing random noise spectrum. One of the commonly used types of dither is TPDF (triangular probability density function, which uses white noise with a flat frequency spectrum), but devices can also add noise containing a greater amount of high frequency content (called blue noise). The process involving blue noise is known as colored/shaped noise dithering or noise shaping, and it concentrates the noise in less audible frequencies (generally those above 15-16 kHz), while reducing the level of the noise in the frequency range humans hear best (between 2 and 5 kHz and around 12 kHz).
Ducking—the technique of dropping one signal below another. It is frequently used in voiceovers to place the main signal in the background while the announcer speaks.
Dynamic microphone—these microphones operate on the principle of electromagnetic induction. A light diaphragm connected to a finely wrapped coil of wire suspended in a magnetic field moves within that magnetic field to induce an electrical current proportional to the displacement velocity of the diaphragm. Dynamic microphones are also called velocity or moving-coil microphones.
Dynamic range—the difference between the softest and loudest sound a system can produce.
Early reflections—the first reflections to arrive at a listening position within 80 ms of the direct sound.
Equal loudness curves—these curves or contours, originally established by the researchers Harvey Fletcher and Wilden A. Munson in the 1930s (and refined by later researchers), show how loudness affects the way humans hear various frequencies. Figure Glossary.3 demonstrates that people exhibit the greatest sensitivity to frequencies around 4 kHz and the least sensitivity at either end of the spectrum, particularly in the lower part of the hearing range. In other words, for listeners to perceive a 50 Hz sound in the same way they perceive a 1 kHz sound at 40.0 dB, the level of the signal has to be increased to 70.0 dB (in the chart, follow the 40 phon contour from 1 kHz up to the 70.0 dB level and the frequency below this point is roughly 50 Hz).
Fast Fourier transform (FFT)—a type of mathematical analysis, first developed by Jean Baptiste Joseph Fourier in the early nineteenth century, that allows data from one domain to be transformed into another domain. Computers perform the Fourier transform (that is, the mathematical calculations for it) at a very high speed, and modern spectrograms rely on what is known as the "fast Fourier transform" to plot frequency against amplitude in real time so that the visual representation of a signal changes as rapidly as the signal itself.
Filter—any device that alters the frequency spectrum of a signal by allowing some frequencies to pass, while attenuating others. Filters change the balance between the various sine waves that constitute a complex waveform.
Free sound field—see Direct or free sound field.
Frequency—a measure of how often ("frequently") an event repeats itself. A sound source which vibrates back and forth 1,000 times per second has a frequency of 1,000 cycles per second (cps). Frequency is now stated in hertz (Hz) instead of cps (named after Heinrich Hertz, a German pioneer in research on the transmission of radio waves).
Frequency response—the range of frequencies that an audio device will reproduce at an equal level (within a tolerance, such as 3.0 dB). It is a way of understanding how a microphone responds to sound sources and is usually expressed in graph form, where the horizontal axis represents frequency and the vertical axis amplitude (in dB).
Fundamental—the lowest frequency in a complex waveform. The fundamental is perceived as the pitch of a note.
Harmonics—see Overtone series.
Headroom—the difference between the average or nominal level of a signal (in EBU terms, this is the target loudness) and the point at which the signal clips (0.0 dBFS in digital systems).
Hertz (Hz)—the term used to designate frequency in cycles per second and named after the German physicist Heinrich Hertz. It was adopted as the international standard in 1948.
Impulse response (IR)—the reverberation characteristics of an ambient space. An IR is recorded using a short burst of sound (for example, a starter pistol) or a full-range frequency sweep played through loudspeakers to excite the air molecules in a room. After the sound of the stimulus has been removed from the recording (through a process known as deconvolution), the room's impulse response or reverb tail can be added to a dry signal.
Inverse Square Law—in the direct or free field (that is, in a field free from reflections), soundwaves radiate in all directions from a source in ever-expanding spheres, and as the surface areas of these spheres increase over distance, the intensity of the sound decreases in relation to the area the soundwaves spread across (see Figure Glossary.4). The Inverse Square Law states that the intensity of a sound decreases proportionally to the square of the distance from the source. In other words, for every doubling of the distance, the sound pressure reduces by half, which the human ear perceives as a decrease of 6.0 dB (note that this principle applies only in the direct or free field; in enclosed spaces, the actual decrease is somewhat less than 6.0 dB).
K-weighting—a filter that approximates human hearing by de-emphasizing low frequencies (to make them less loud) and emphasizing higher frequencies (to make them louder) (see Figure Glossary.5).
Late reflections—see Diffuse or reverberant sound field.
Line level—refers to the average voltage level of an audio signal. In professional signal processing components, it is usually +4.0 dBu (dBu is the signal level expressed in decibels referenced to voltage).
LKFS (Loudness, K-weighted, referenced to digital Full Scale)—loudness level on an absolute digital scale. It is analogous to dBFS, for one unit of LKFS equals one dB. This terminology is used by the International Telecommunication Union and the Advanced Television Systems Committee (USA); it is identical to LUFS.
Logarithmic—instead of dealing with a number itself, the number is represented by its logarithm (often abbreviated as log). The common log of a number is the power to which the number 10 must be raised to obtain that number; for example, 10 to the power of 2 (10²) equals 100, thus the log of 100 is 2. In a logarithmic scale, distances are proportional to the logs of the numbers represented, but in a linear scale the distances are proportional to the numbers themselves.
Lossless—a codec for reducing the size of a file that preserves the original data during coding and decoding; that is, no information is lost in the process.
Lossy—a codec that removes information from an audio signal in order to reduce the size of the file. Principles of psychoacoustics are used to identify parts of the signal that humans cannot hear well, and the codec discards less audible components, which has a detrimental effect on sound quality.
Loudness—a perceptual quantity: the magnitude of the physiological effect produced when a sound stimulates the ear. This physiological reaction is measured by meters employing an algorithm developed by the International Telecommunication Union (ITU) designed to approximate the human perception of level.
LRA (Loudness Range)—originally developed by TC Electronic, it is the overall range of the material from the softest part of a signal to the loudest part, given in LU. To prevent extreme events from affecting the reading, the top 5% and the lowest 10% of the total loudness range are excluded from the measurement (for example, a single gunshot or a long passage of silence in a movie would result in a loudness range that is far too broad).
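A rough Python sketch of that exclusion rule, assuming NumPy and a toy list of short-term loudness values; a real LRA measurement also applies gating, which is omitted here:

    import numpy as np

    # Illustrative sketch only: loudness range from short-term loudness values (toy data, in LUFS).
    short_term = np.array([-30.0, -26.0, -24.0, -23.0, -22.0, -20.0, -18.0])
    low, high = np.percentile(short_term, [10, 95])   # drop the lowest 10% and the top 5%
    print(round(high - low, 1))                       # loudness range in LU (prints 9.0 here)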
LU (Loudness Unit)—a relative unit of loudness referenced to something other than digital full scale. It employs K-weighting and is analogous to dB, for one LU equals one dB. This terminology was established by the International Telecommunication Union (ITU).
LUFS (Loudness Unit, referenced to digital Full Scale)—loudness level on an absolute digital scale. It is analogous to dBFS, for one unit of LUFS equals one dB. LUFS employs K-weighting and is identical to LKFS. This terminology is used by the European Broadcasting Union (EBU).
Maximum sound pressure level—the maximum sound pressure level a microphone will accept, while producing harmonic distortion of 0.5% at 1,000 Hz.
Near field—the sound field immediately adjacent to a source where direct sound energy dominates. For microphones, this is the distance within which reflected sound remains minimal.
Noise floor (self-noise)—the internal noise level generated by a device or system (for example, a microphone in the absence of soundwaves striking the diaphragm). The noise comes from the resistance of the coil or ribbon in electromagnetic mics and from the thermal noise of the resistors, as well as the electrical noise of the pre-amp, in electrostatic mics. It is expressed in dB (lower numbers are better).
Normalization—a method of adjusting loudness so that listening levels are more consistent for audiences.
Nyquist Theory—between 1924 and 1928, Harry Nyquist discovered that an analog signal can be recreated accurately only if measurements are taken at a rate equal to or greater than twice the highest frequency in the signal. The maximum frequency a digital system can represent is about half the sampling rate.
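For example (a minimal Python sketch with standard, assumed rates rather than figures from the text):

    # Illustrative sketch only: the Nyquist relationship between sampling rate and frequency.
    sample_rate = 44_100                  # CD sampling rate in samples per second
    max_representable = sample_rate / 2   # roughly the highest frequency the system can represent
    print(max_representable)              # 22050.0 Hz

    highest_frequency = 20_000            # assumed upper limit of the signal in Hz
    required_rate = 2 * highest_frequency # minimum sampling rate needed to avoid aliasing
    print(required_rate)                  # 40000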
Overtone series—the frequencies above the fundamental in a complex waveform, that is, a collection of sine waves integer multiples of the fundamental frequency. This series of frequencies gives notes their tonal color or timbre.
"Passive" device—one that does not require external power to function.
Periodic soundwave—a waveform that repeats its shape. All waveforms with pitch are periodic.
Phase—the starting position of a periodic wave in relation to a complete cycle.
Phon—a unit used to relate perceived loudness to the actual sound pressure level of a signal. Phons describe the psychological effect of loudness. The concept of the phon is part of the system known as the equal loudness curves or contours, a system in which the threshold of hearing (0.0 dB) for a 1 kHz sine wave (pure tone) is equated to 0 phons.
Plane soundwave—when waves propagating spherically reach the point at which the surfaces of the spheres become almost flat, the intensity of the waves decreases in a linear fashion, more or less uniformly. At this distance, the Inverse Square Law no longer applies, because the total area of the plane changes very little as the waves travel forward.
PLR (Peak-to-loudness ratio)—the difference between a signal's maximum true-peak level and its integrated or average loudness.
Polar coordinate graph—a graphing technique used for plotting the directional sensitivity patterns of microphones. Concentric circles represent the sensitivity in terms of dB, and the plotted lines show the amount of attenuation that occurs for specific frequencies arriving from various angles.
Polar patterns—the polar response of a microphone indicates its sensitivity to sounds arriving from any location around the diaphragm.
Pre-delay—the time gap between the arrival of the first wavefront at a listening position in an enclosed space and the arrival of the first reflection from a nearby surface (also known as the initial-time-delay gap).
Pressure-gradient transducer—a microphone operating on differences in pressure from soundwaves arriving on both sides of a single diaphragm or on the outer surfaces of two diaphragms joined together (but separated by a backplate).
Pressure transducer—microphone designers clamp a single circular diaphragm inside a completely enclosed casing so that only the front face is exposed to the sound field. Sounds arriving from all directions exert equal force on the diaphragm, and because the diaphragm responds identically to every pressure fluctuation on its surface, these microphones exhibit a non-directional, that is, an omnidirectional (360°), response pattern.
Proximity effect—the discernible increase in the low-frequency response of pressure-gradient microphones (cardioids and ribbons) as sound sources move closer to the diaphragm.
Pulse code modulation (PCM)—invented by Alec Reeves in the late 1930s, PCM has become the standard method for digitally encoding analog waveforms (the technique is used in both WAV and AIFF). It has three components: sampling, quantizing, and encoding.
Quantization—when the voltage measurement at a sample falls between two of the integers in a scale based on bit depth, quantization rounds (quantizes) the measurement to the closest step of the scale.
Quantization noise—the rounding of measurements taken in the sampling process introduces errors into the system (heard as nonrandom noise), and the size of the error depends on the number of steps the scale contains: a 2-bit scale has 4 possible steps (2²), a 3-bit scale 8 steps (2³), a 4-bit scale 16 steps (2⁴), an 8-bit scale 256 steps (2⁸), a 16-bit scale 65,536 steps (2¹⁶), and a 24-bit scale 16,777,216 steps (2²⁴). Scales based on higher numbers of bits, then, because they have more finely graded steps, reduce the size of the rounding error and, hence, the amount of noise in the system. In both 16 and 24 bit scales, the rounding error is so small that the noise introduced by quantization is quite faint.
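The step counts quoted above can be reproduced with a short Python loop (illustrative only):

    # Illustrative sketch only: number of quantization steps available at each bit depth.
    for bits in (2, 3, 4, 8, 16, 24):
        print(bits, "bits ->", 2 ** bits, "steps")
    # prints 4, 8, 16, 256, 65536 and 16777216 steps respectively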
Resolution—an indication of the sound quality of digital audio based on sample rate and bit depth. Today "high-resolution audio" has a bit depth of at least 24 and a sample rate at or greater than 88.2 or 96 kHz. The greater the "resolution" of the system, the more accurately it can represent waveforms.
Reverberation—see Diffuse or reverberant sound field.
Reverberation radius—see Critical distance.
Reverberation time (RT)—after a sound source has stopped emitting soundwaves, the time required for the reverberant field to decrease to one-millionth of its original strength, a reduction of 60.0 dB.
Ribbon microphone—these microphones operate on the principle of electromagnetic induction. A thin strip of corrugated aluminum (the diaphragm) is suspended in a magnetic field so that both sides engage with the sound source. These microphones induce an electrical current proportional to the velocity of displacement.
Sample peak—the peak level of a signal that occurs at sampling points.
Sampling—the process of measuring the voltage of an electrical audio signal at regular intervals so that the measurements can later be output as binary numbers.
Self-noise—see Noise floor.
Sensitivity—the ratio between the electrical output level of a microphone and the sound pressure level on the diaphragm. Usually expressed in dB, it is a measurement of the output produced when a mic is subjected to a standardized sound pressure level (that is, it indicates how much signal any given SPL produces).
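As a purely illustrative example (the numbers are invented, not taken from the glossary), a microphone that produces 10 mV at the standard 1 pascal (94 dB SPL) reference level has a sensitivity of about -40 dB re 1 V/Pa:

```python
import math

# Hypothetical mic: 10 mV output at the 1 Pa (94 dB SPL) reference pressure.
output_volts = 0.010
sensitivity_db = 20 * math.log10(output_volts / 1.0)   # re 1 V/Pa
print(round(sensitivity_db, 1))                         # -> -40.0 dB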
Signal-to-noise ratio (SNR)—the ratio between the useful signal produced by a device and its inherent noise when the signal is removed, expressed in dB (higher numbers are better). A signal-to-noise ratio of 47.0 dB means that the noise floor is 47.0 dB below the signal.
Sine wave—a periodic waveform consisting of a single frequency. A sine wave has pitch but lacks the timbral quality associated with the complex waveforms produced by musical instruments and voices.
Sound pressure level (SPL)—soundwaves cause the air pressure at any given point in a wave's cycle to vary above (compression) or below (rarefaction) barometric pressure. This variation in pressure quantifies the strength of a sound and is called sound pressure (this is what a microphone measures). When expressed on a decibel scale, it is called sound pressure level (20 micro-pascals of air pressure is 0.0 dB on the scale, and this corresponds to the threshold of hearing at 1,000 Hz for a normal human ear).
Spherical soundwave—close to a small sound source (such as the human voice), waves propagate spherically; that is, they travel away from the source in spheres that continuously increase in diameter. These waves decrease in intensity quite rapidly, falling by 6.0 dB for every doubling of the distance in a field free of reflections (the Inverse Square Law).
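Both of the relationships above, the decibel scale referenced to 20 micropascals and the 6.0 dB drop per doubling of distance, come from the same logarithmic formula. The short check below uses assumed example pressures and is not taken from the book.

```python
import math

P_REF = 20e-6  # 20 micropascals = 0.0 dB SPL (threshold of hearing at 1 kHz)

def spl_db(pressure_pa):
    """Sound pressure level in dB re 20 micropascals."""
    return 20 * math.log10(pressure_pa / P_REF)

print(round(spl_db(20e-6), 1))        # 0.0 dB, the threshold of hearing
print(round(spl_db(1.0), 1))          # 94.0 dB, the common 1 Pa reference level

# Inverse Square Law: doubling the distance halves the sound pressure in a
# free field, a drop of 20*log10(2), i.e. about 6.0 dB.
print(round(20 * math.log10(2), 1))   # 6.0
```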
Transducer—a device that converts one form of energy to another (verb: transduce).
Transient—any sudden and brief fluctuation in a signal or sound that disturbs its steady-state nature. Transients generally are of a much higher amplitude than the average level and often cause devices to overload. The initial peak in energy at the beginning of a waveform (the "attack") is called an onset transient (examples: a word which starts with a consonant, the hammer of a piano striking the strings, a rim shot on a snare drum).
Transient response—a measure of the ability of a device to handle and faithfully reproduce sudden fluctuations. In microphones, it is a measure of how quickly a diaphragm responds to abrupt changes in sound pressure (lighter diaphragms respond more quickly).
True peak—the undetected peak level of a signal that occurs between sampling points.
Содержание
| стр |
Глава 1 | 55 |
Глава 2 | 56 |
Глава 3 | 58 |
Глава 4 | 59 |
Глава 5 | 52 |
Глава 6 | 66 |
Глава 7 | 67 |
Глава 8 | 68 |
Глава 9 | 70 |
Глава 10 | 73 |
Глава 11 | 74 |
Глава 12 | 76 |
Глава 13 | 78 |
Глава 14 | 78 |
Глава 15 | 79 |
Chapter 1
Page 3
Answers:
1. A 2. A 3. C 4. A 5. B
Page 5
Answers:
1. False 2. True 3. False 4. True 5. False
Page 6
1. What is the mathematical relationship between overtones and the fundamental frequency on a vibrating string?
2. How does the harmonic series relate to musical notes on a staff?
3. What contributes to the characteristic timbre of musical instruments?
4. What examples are given to illustrate the differences in overtone emphasis between instruments?
5. What visual tools are mentioned for analyzing the overtone series of a violin note?
1. The mathematical relationship between overtones and the fundamental frequency on a vibrating string is that doubling the frequency halves the wavelength. This relationship can be depicted schematically, showing that the first overtone is at 2 times the frequency of the fundamental, the second overtone at 3 times, and so on.
2. The harmonic series can be represented as notes on a staff, where the blackened notes may be slightly out of tune in an equally tempered scale. This visual representation helps to illustrate the relationship between the fundamental frequency and its overtones.
3. The characteristic timbre of musical instruments is derived from the multiple frequencies that sound together. The varying amplitudes of the partials between different instruments lead to differences in timbre, as specific overtones are emphasized differently.
4. Saxophones and clarinets are given as examples of instruments that emphasize specific overtones, while a violin note is depicted with a complex waveform showing the overtone series above the fundamental frequency.
5. The document mentions a Digital Audio Workstation (DAW) and a spectrogram as visual tools for analyzing the overtone series of a violin note, revealing the complex waveform and the overtones lying above the fundamental.
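The integer relationship described in answers 1 and 2 is easy to tabulate; the A4 = 440 Hz fundamental and the 343 m/s speed of sound in the sketch below are illustrative assumptions rather than values from these pages.

```python
SPEED_OF_SOUND = 343.0   # m/s, assumed room-temperature value
fundamental = 440.0      # Hz, an arbitrary example (A4)

# Harmonic n has n times the fundamental's frequency, so its wavelength is
# 1/n of the fundamental's (doubling the frequency halves the wavelength).
for n in range(1, 6):
    freq = n * fundamental
    wavelength = SPEED_OF_SOUND / freq
    label = "fundamental" if n == 1 else f"overtone {n - 1}"
    print(f"harmonic {n} ({label}): {freq:7.1f} Hz, wavelength {wavelength:.3f} m")
```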
Page 7-8
1. What is reverberation and how does it affect sound perception in an enclosed space?
2. How does the distance from a sound source affect sound amplitude according to the Inverse Square Law?
3. What factors determine the delay of the earliest identifiable reflections in a room?
4. What happens to the amplitude of reverberant sound compared to early reflections?
5. What is reverberation time (RT60) and how is it defined?
1. Reverberation refers to the accumulation of random reflections of sound waves arriving at a listening position so closely together that the hearer does not perceive each reflection separately. In an enclosed space, sound waves strike surfaces and reflect, leading to a denser wash of sound that prevents listeners from hearing individual reflections, particularly after about 80 ms.
2. According to the Inverse Square Law, sound propagates spherically from a source in the direct or free field, decreasing in amplitude at a rate of 6.0 dB for every doubling of the distance. However, in enclosed spaces, the drop in amplitude is less pronounced as the full decrease occurs only in purely free fields.
3. The delay of the earliest identifiable reflections in a room is determined by the size of the room, the nature of the surfaces, and the position of the listener. These reflections typically begin to arrive 30-80 ms after the first wavefront reaches the listener.
4. Reverberant sound has a lower amplitude than that of early reflections because the surfaces that the sound waves repeatedly strike absorb some of the energy of the late reflections. This results in a gradual decay of the sound pressure level once the source has stopped emitting sound.
5. Reverberation time (RT60) is defined as the time it takes for the sound pressure level of a complex set of room reflections to decrease to one-millionth of its original strength, which corresponds to a reduction of 60.0 dB. This measurement indicates how long reverberation persists in a space after the sound source has stopped.
Page 9
Answers:
1. A 2. B 3. B 4. B 5. B
Page 9-10
Answers:
1. True 2. False 3. False 4. True 5. False
Page 11-12
Answers:
1. A 2.A 3.B 4.A 5.C
Chapter 2
Page 13
Answers:
1. B 2. B 3. A 4. B 5. C
Page 14
1. What does the term 'analog' refer to in the context of audio recording?
2. How does digital audio differ from analog audio in terms of signal representation?
3. What are the three components of Pulse Code Modulation (PCM) and what does each component do?
4. Who invented Pulse Code Modulation (PCM) and in what decade did this occur?
5. What does the term 'bit' stand for in digital audio systems, and what does it signify?
Answers:
1. The term 'analog' refers to the representation of a signal by continuously variable and measurable physical quantities, such as pressure or voltage. In acoustic audio recording, it involves measuring constantly changing air pressure to induce an electrical current, where the voltage fluctuates in a way that corresponds directly to the amplitude variations of soundwaves.
2. Digital audio differs from analog audio in that it converts electrical current into digital information using a series of discrete binary numbers (0, 1) to represent the fluctuating voltage in an analog signal. This allows for more accurate and reliable storage, processing, transmission, and reproduction of audio signals.
3. The three components of Pulse Code Modulation (PCM) are sampling, quantizing, and encoding. During sampling, a device measures the voltage of an analog signal at regular intervals. Quantizing involves rounding those measurements to the nearest values on a predetermined scale, and encoding converts these values into binary digits for use in digital systems.
4. Pulse Code Modulation (PCM) was invented by Alec Reeves in the late 1930s.
5. The term 'bit' stands for 'binary digit.' It signifies a basic unit of information in digital audio systems, which is based on two values, 0 and 1.
Page 15 True/False
Answers:
1. True 2. False 3. True 4. False 5. False
Page 15
Answers: 1.A 2.B 3.B 4.B 5.B
Page 15 True/False
Answers:
1. False 2. True 3. True 4. False 5. True
Page 16
Answers:
1. False 2.True 3.False 4.True 5.True
Page 16
1. What is the effect of quantization on a digital signal?
2. How does the number of bits in a scale affect the rounding error?
3. What is the purpose of dithering in audio processing?
4. What happens during the process of requantization?
5. Why is random noise preferred over nonrandom noise in audio signals?
Answers:
1. Quantization changes the signal to match the points on a scale, which introduces errors into the system that are heard as nonrandom noise. The size of the error depends on the number of steps the scale contains, with higher bit scales reducing the size of the rounding error and the amount of noise in the system.
2. A higher number of bits in a scale results in more finely graded values or steps, which reduces the size of the rounding error. For example, a 16-bit scale has 65,536 steps, and a 24-bit scale has 16,777,216 steps, making the rounding error almost imperceptible.
3. Dithering is employed to reduce the audibility of errors in an audio signal when converting from higher bit depths to lower ones. It adds specially constructed random noise to the signal before its bit depth is reduced, replacing nonrandom distortion with a more pleasing noise spectrum.
4. During requantization, bit depth converters discard eight of the last significant bits of the binary word that represents the sampled voltage. This process rounds the samples to match a scale with fewer steps, which introduces faint nonrandom noise, especially noticeable during quiet moments.
5. Research has shown that the more random the noise in a system, the less irritating it will be to listeners. White noise, a type of random noise with a flat frequency spectrum, is often preferred over the nonrandom variety produced during quantization or truncation.
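The requantization and dithering steps described in answers 3 and 4 can be sketched in a few lines of Python: triangular (TPDF) noise of about one quantization step is added before the samples are rounded to the coarser 16-bit scale. This is a simplified illustration of the principle, not the exact algorithm of any particular bit-depth converter.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 48_000
t = np.arange(fs) / fs
signal = 0.25 * np.sin(2 * np.pi * 1000 * t)   # a quiet 1 kHz test tone

q = 1.0 / 32768   # quantization step of a 16-bit scale

def to_16_bit(x, dither=False):
    if dither:
        # TPDF dither: the sum of two uniform sources spans +/- one step.
        x = x + (rng.uniform(-0.5, 0.5, x.shape) + rng.uniform(-0.5, 0.5, x.shape)) * q
    return np.round(x / q) * q

for use_dither in (False, True):
    err = to_16_bit(signal, use_dither) - signal
    # Without dither the error is correlated with the signal (nonrandom noise);
    # with dither it becomes slightly louder but random, noise-like, and benign.
    print(f"dither={use_dither}: error RMS = {np.sqrt(np.mean(err**2)):.2e}")
```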
Page 17
1. What is TPDF and how does it relate to audio dithering?
2. What is the purpose of noise shaping in audio production?
3. How does increasing the bit depth from 16 to 24 bits affect audio quality?
4. What is the significance of the noise floor in audio recording?
5. What precautions should engineers take when applying dither to audio tracks?
Answers:
1. TPDF stands for triangular probability density function, which is a type of dither used in audio processing. It is characterized by white noise with a flat frequency spectrum. TPDF dithering helps to reduce quantization errors in audio signals, making the audio sound quieter and more pleasant, especially when compared to other forms of dithering.
2. Noise shaping in audio production is used to concentrate the noise introduced by dithering into less audible frequencies, typically above 15-16 kHz, while reducing noise in the frequency range that humans hear best (between 2.0 and 5.0 kHz and around 12 kHz). This technique enhances the perceived dynamic range of audio, making it sound clearer and more defined.
3. Increasing the bit depth from 16 to 24 bits does not enhance the perceptible resolution or 'oneness' of the audio. Instead, it increases the dynamic range, which is the difference between the softest and loudest sounds, by lowering the noise floor. A 16-bit noise floor is already below what humans can hear, so the main benefit of 24-bit audio is the ability to utilize a higher signal-to-noise ratio.
4. The noise floor in audio recording refers to the level of background noise present in the recording chain. A 16-bit noise floor is already below the threshold of human hearing, meaning that in systems with high levels of inherent noise, dithering may not provide audible benefits, as the existing noise can mask any improvements made by dithering.
5. Engineers should apply dither as the last stage of preparing tracks for delivery to ensure that the added noise is decorrelated from the audio signal. If further processing occurs after dithering, new plugins could accentuate the effects of dithering, potentially leading to audible noise artifacts, especially if noise shaping was part of the dither algorithm.
Page 17
Answers:
1. True 2.False 3.False 4.True 5.False
Page 18 True/false
Answers:
1. False 2. True 3. False 4. True 5. True
Chapter 3
Page 23 True/ False
Answers:
1. False 2. True 3. False 4. True 5. False
Pages 25-26
Answers: 1.B 2.C 3.B 4.A 5.C
Pages 24-25 Questions:
1. What is the basic operating principle of condenser microphones?
2. How does a pressure transducer microphone respond to sound waves?
3. What materials are commonly used for the diaphragm in pressure transducer microphones?
4. What is the purpose of the small holes in the backplate of a pressure transducer?
5. How do manufacturers compensate for high-frequency loss when using omnidirectional microphones in a reverberant sound field?
Answers:
1. Condenser microphones operate electrostatically, using a capsule that consists of a movable diaphragm and a fixed backplate, which form the two electrodes of a capacitor. When sound waves strike the diaphragm, the distance between the two surfaces changes, causing fluctuations in capacitance and resulting in variations in voltage that create an electrical current corresponding to the acoustic soundwave.
2. In pressure transducers, the microphone has a single circular diaphragm clamped inside a completely enclosed casing, exposing only the front face to sound. Sound waves arriving from all directions exert equal force on the diaphragm, causing it to respond identically to every pressure fluctuation, resulting in a nondirectional, omnidirectional response pattern.
3. Manufacturers often use polyethylene for the diaphragm and may coat one side with a metal, such as gold. This combination helps in the effective functioning of the microphone by providing the necessary properties for sound capture.
4. The small holes, evenly distributed across the backplate, serve to dampen the diaphragm's motion by capturing air as the diaphragm flexes. This design helps maintain equal air pressure between the interior chamber and the exterior, allowing for consistent performance under varying atmospheric conditions.
5. Manufacturers compensate for high-frequency loss in omnidirectional microphones used in diffuse or reverberant sound fields by altering the design of free-field omnis to make the capsule more sensitive to higher frequencies, ensuring that less high-frequency detail is lost during recording.
Page 27 Figure 3.6 True/False
Answers
1. True 2. False 3. True 4. True 5. False
Pages 28-29
Answers:
1. A 2.B 3.C 4.B 5.B
Pages 27-28 Questions:
1. What is the purpose of introducing a delay in capsules for microphones?
2. What are the acceptance angles for cardioid, supercardioid, and hypercardioid microphones?
3. How do microphone designers define an acceptance angle?
4. What is the significance of locating sound sources within a microphone's pickup angle?
5. What is the function of dual diaphragm capsules in microphones?
Answers:
1. The purpose of introducing a delay in capsules for microphones is to attenuate soundwaves arriving from the rear, allowing for better sound capture from the intended direction.
2. Cardioid microphones generally have a pickup or acceptance angle of about 131°, which can be narrowed to 115° for supercardioid and 105° for hypercardioid microphones.
3. Microphone designers define an acceptance angle by the arc over which the sensitivity of a mic does not fall by more than 3.0 dB, ensuring that the mic's response remains virtually identical within its angle of acceptance.
4. Locating sound sources within a microphone's pickup angle is significant because it allows for the reproduction of sounds as a sonically homogeneous unit without noticeable degradation in tone color.
5. Dual diaphragm capsules in microphones function by having two diaphragms with each side having a cardioid pattern, allowing designers to achieve various response patterns by supplying voltages to the diaphragms independently.
Chapter 4
Page 31 True/False
Answers:1.True 2.False 3.True 4.True 5.False
Pages 32-33 Questions:
1. What does the polar response of a microphone indicate?
2. How do omnidirectional microphones respond to sound from different directions?
3. What happens to the sensitivity of omnidirectional microphones at higher frequencies?
4. What are the characteristics of cardioid microphones?
5. How do supercardioid and hypercardioid microphones differ from cardioid microphones?
Answers:
1. The polar response of a microphone indicates its sensitivity to sounds arriving from any location around the mic. It is plotted on a polar coordinate graph, showing the relative sensitivity of the capsule for a given frequency in relation to the angle of incidence.
2. Omnidirectional microphones pick up sound equally from all directions, resulting in a response pattern that is quite close to a perfect circle, especially for lower frequencies. Below 2.0 kHz, they exhibit a purely omnidirectional pattern.
3. At higher frequencies, omnidirectional microphones exhibit a normal narrowing of response. This attenuation occurs because soundwaves approaching from the rear with wavelengths equal to or less than the microphone's diameter tend not to bend around the end of the mic.
4. Cardioid microphones feature a reasonably wide pickup area at the front and have a large null at the rear. They are designed to capture sound primarily from the front while rejecting sounds from behind.
5. Supercardioid and hypercardioid microphones restrict the width of their front lobes compared to cardioid microphones. They have small lobes of limited sensitivity at the rear, allowing them to reject sounds from behind mainly in the regions to the left and right of the rear lobe.
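All of the patterns compared in these answers belong to the same first-order family, sensitivity(θ) = A + B × cos(θ) with A + B = 1. The coefficients below are standard textbook values added for illustration, not figures quoted on these pages.

```python
import math

# First-order polar patterns: sensitivity(theta) = A + B*cos(theta), A + B = 1.
patterns = {
    "omnidirectional": (1.00, 0.00),
    "cardioid":        (0.50, 0.50),
    "supercardioid":   (0.37, 0.63),
    "hypercardioid":   (0.25, 0.75),
    "bi-directional":  (0.00, 1.00),
}

for name, (a, b) in patterns.items():
    side = a + b * math.cos(math.pi / 2)   # sensitivity at 90 degrees
    rear = a + b * math.cos(math.pi)       # sensitivity at 180 degrees
    print(f"{name:15s} side={side:+.2f}  rear={rear:+.2f}")
# Cardioid: full null at the rear; super/hypercardioid: narrower front lobe
# with a small (opposite-polarity) rear lobe, matching the descriptions above.
```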
Pages 32 -33
Answers: 1. A 2.A 3.A 4.B 5.B
Page 35 True/False
Answers: 1. True 2.False 3.True 4.True 5.True
Pages 34-35 Questions:
1. What is the Random Energy Efficiency (REE) of omnidirectional microphones and how is it used as a reference for directional microphones?
2. How do bi-directional and cardioid microphones compare in terms of their REE and what does this indicate about their sound pickup?
3. What is the distance factor of a cardioid microphone and what does it imply about its placement relative to an omnidirectional microphone?
4. What effect does the distance between a microphone and its sound source have on the recording character?
5. What is the critical distance in microphone placement and why is it significant?
Answers:
1. The Random Energy Efficiency (REE) of omnidirectional microphones is assigned a value of 1. This value is used as a reference against which manufacturers compare directional microphones, which have a more selective response and therefore an REE of less than 1.
2. Both bi-directional and cardioid microphones have an REE of 0.333, meaning they react to only about 1/3 of the total sound field. As a result, the ambient sound these microphones pick up is 4.8 dB lower than what an omnidirectional microphone would capture.
3. The distance factor of a cardioid microphone is 1.7, which implies that engineers can position it 1.7 times farther away from a sound source than an omnidirectional microphone while still achieving a similar ratio of direct to reverberant sound.
4. The distance between a microphone and its sound source can dramatically affect the recording character. A close placement captures mainly direct sound, exaggerating imperfections, while a distant placement eliminates most direct sound, capturing the source as a whole but losing subtle details.
5. The critical distance is the point at which the level of direct sound equals that of the reverberations. It is significant because it represents a balance between capturing the source's details and the ambient sound, affecting the overall quality of the recording.
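The two figures quoted in answers 2 and 3 both follow from the REE value of one-third, using the standard relations dB = 10 × log10(REE) and distance factor = 1/√REE; the check below is an added illustration, not text from the book.

```python
import math

ree = 1 / 3   # random energy efficiency of cardioid and bi-directional mics

ambient_db = 10 * math.log10(ree)      # ambient pickup relative to an omni
distance_factor = 1 / math.sqrt(ree)   # how much farther the mic can sit

print(round(ambient_db, 1))            # -> -4.8 dB
print(round(distance_factor, 2))       # -> 1.73, quoted as 1.7
```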
Pages 36-37 Questions:
1. What role does the Inverse Square Law play when a microphone is positioned close to a sound source?
2. How does the distance from the sound source affect the sound pressure level on the rear side of the diaphragm?
3. What discrepancies in sound pressure level were determined by John Woram for a microphone placed at different distances from the sound source?
4. What is the effect of proximity on sound pressure levels for lower frequencies compared to higher frequencies?
5. What does Figure 4.9 illustrate regarding the proximity effect in a cardioid microphone?
Answers:
1. When a microphone is positioned close to a sound source, the Inverse Square Law plays an important role in determining the relative pressures sound waves exert on the front and back of the diaphragm. A 6.0 dB drop in loudness occurs only when the distance from the source doubles, which means that at shorter distances, the pressure difference between the front and back of the diaphragm is significant.
2. As the distance from the sound source increases, the sound pressure on the front and back of the diaphragm approaches equality. For example, at 100 centimeters, the difference in distance that sound waves travel to reach either side becomes negligible, resulting in minimal alteration of the sound pressure level on the rear side of the diaphragm.
3. John Woram determined that a microphone with an internal path difference of 1 centimeter would have discrepancies in sound pressure level of 6.02 dB when placed 10 centimeters from the sound source and 0.83 dB at 100 centimeters. This illustrates how distance affects the pressure differences between the front and back of the diaphragm.
4. Within the near field, the pressure differences between the front and back of the diaphragm primarily cause the sound pressure level to rise for lower frequencies. Lower frequencies consist of waves that are longer than the diaphragm's diameter, allowing them to arrive at both sides more in phase, resulting in summation. In contrast, higher frequencies do not bend around the edges of the capsule easily, leading to greater cancellation.
5. Figure 4.9 illustrates how distance determines the degree of proximity effect below 640 Hz for a typical small-diaphragm condenser microphone. It shows the variations in sound pressure levels at different distances, highlighting how closer placements result in a more pronounced proximity effect.
Pages 37-39 Phase
Answers: 1.A 2.B 3.B 4.B 5.C
Pages 39-40 Questions:
1. What is the 'three-to-one' principle in microphone placement, and how does it help in reducing comb filtering effects?
2. What is comb filtering, and how does it affect sound quality?
3. How does the time delay between microphones influence the audibility of comb filtering?
4. What amplitude differences can occur when two microphones have identical output levels, and how can this be mitigated?
5. What is the recommended attenuation level at one microphone to make comb filtering tolerable for most listeners?
Answers:
1. The 'three-to-one' principle in microphone placement suggests that when using multiple microphones, they should be placed at least three times the distance apart from each other as they are from the sound source. This helps in reducing the negative effects of comb filtering by ensuring that the phase issues caused by the microphones capturing out-of-phase signals are minimized, leading to a more balanced sound quality.
2. Comb filtering is an interference effect that occurs when two or more microphones capture out-of-phase signals, resulting in various degrees of sound quality degradation. It can induce undesirable 'phasiness' or 'hollowness' in the audio, particularly when phase-shifted signals are played back or summed to mono.
3. The audibility of comb filtering is influenced by the time delay between the microphones; larger delays tend to make comb filtering inaudible, while shorter delays, especially those under 10 ms, exacerbate the problem and make the phasiness more prominent.
4. When two microphones have identical output levels, boosts of about 6.0 dB and cuts of as much as 30.0 dB can frequently occur. This distortion can be mitigated by reducing the difference in amplitude between the peaks and troughs caused by phase shifts, such as by attenuating the output level of one microphone.
5. An attenuation of 9.0 to 10.0 dB at one of the microphones can decrease amplitude differences to about 4.0 dB, a level at which the worst offender, comb filtering, becomes tolerable for most listeners.
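The 9 dB figure behind the three-to-one principle and the notch spacing of comb filtering can both be estimated numerically. The sketch below assumes simple free-field (inverse-square) behaviour and an illustrative 1 ms delay; neither number is taken from these pages.

```python
import math

# Three-to-one rule: if the second mic is 3x farther from the source, the
# spill it captures is about 20*log10(1/3) lower (free-field assumption).
print(round(20 * math.log10(1 / 3), 1))    # -> -9.5 dB

# Comb filtering: a copy delayed by dt cancels at odd multiples of 1/(2*dt).
dt = 0.001                                  # assumed 1 ms inter-mic delay
notches = [(2 * k + 1) / (2 * dt) for k in range(4)]
print([round(f) for f in notches])          # -> [500, 1500, 2500, 3500] Hz
```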
Pages 40-41 True/False
Answers:
1.True 2.True 3.False 4.False 5.True
Page 41
Answers: 1.A 2.C 3.C 4.C 5.A
Pages 42-43
Answers: 1.C 2.B 3.B 4.C 5.B
Page 42-43 Questions
1. What is the recommended difference in decibels (dB) between the primary sound source and the background sounds picked up by a microphone in a multiple microphone setup?
2. What principle should recordists follow to maintain sufficient phase integrity in concert or recital hall locations?
3. How can recordists check the attenuation of a microphone during rehearsal?
4. What is the function of the Auto Align plugin in audio recording?
5. What should engineers do in situations where close miking and signal splitting are required?
Answers:
1. The recommended difference in decibels (dB) between the primary sound source and the background sounds picked up by a microphone is at least 9.0 dB.
2. Recordists should follow the 'three-to-one' principle, which states that for every unit of distance between a microphone and its source, nearby microphones should be separated by at least three times that distance.
3. Recordists can check the attenuation of a microphone during rehearsal by switching off every microphone except for one. The level at the remaining microphone should drop by at least 9.0 dB when the performer at that microphone stops playing.
4. The Auto Align plugin analyzes pairs of signals to find the amount of time delay between them and compensates for the difference, helping to correct out-of-phase signals and enhance the sense of space when some form of delay is desirable.
5. In situations where close miking and signal splitting are required, engineers regularly correct small amounts of phase shift during post-production by manually moving the signals to synchronize them or by using a plugin like Sound Radix's Auto Align.
Chapter 5
Pages 45-46
Answers:
1.B 2.B 3.B 4.B 5.A
Pages 45-46 Questions:
1. What is the significance of phase integrity in stereo recording?
2. How do listeners use intensity and time of arrival to locate sounds?
3. What is the optimum angle for stereo perception and why is it important?
4. What happens when there are no level or time differences in a stereo playback system?
5. How does stereo miking utilize level and time-of-arrival differences?
Answers:
1. Phase integrity is crucial in stereo recording as it directly affects the realism of the stereo image produced. Audio engineers have developed various strategies to maintain phase integrity, which helps in replicating how listeners perceive individual instruments or ensembles in live settings.
2. Listeners utilize two primary cues, intensity (measured as sound pressure level) and time of arrival, to determine the origin of sounds on a horizontal plane. Sounds that arrive first or are stronger provide directional information, allowing listeners to perceive the spatial arrangement of sounds.
3. The optimum angle for stereo perception is 30° from an imaginary center line to each loudspeaker. This arrangement is important because it positions both the listener and the speakers at the corners of an equilateral triangle, enhancing the perception of a stereo panorama and the localization of sound.
4. When there are no level or time differences in a stereo playback system, the listener localizes the sound source at 0°. This means that the sound is perceived as coming from a point equidistant between the two loudspeakers, rather than from the individual speakers.
5. Stereo miking employs the same principles of level and time-of-arrival differences to create phantom images that simulate what a listener would hear at a live event. To properly localize instruments along the stereo plane, the sound sources must fall within the acceptance angle of the microphones.
Pages 47-48
Answers:
1.A 2.C 3.B 4.B 5.B
Pages 48-49 Questions:
1. What is the Blumlein technique and who developed it?
2. How does the Blumlein technique create a phantom image during playback?
3. What happens to the output of the microphones when sound sources are placed to the right or left of the 0° axis?
4. What are the advantages of positioning the Blumlein microphones within the critical distance in a hall?
5. What is one disadvantage of the Blumlein technique related to side reflections?
Answers:
1. The Blumlein technique is a stereo miking method developed by British inventor Alan Blumlein in the 1930s. It involves using two bi-directional microphones set at an angle of 90°, allowing for considerable separation between the signals and making each microphone relatively insensitive to sounds coming from the opposite direction.
2. During playback, sound arriving at the 0° axis of the Blumlein pair creates a strong phantom image midway between the loudspeakers. This occurs because sounds located centrally between the microphones are off-axis by the same amount for each mic, leading to a balanced output.
3. When sound sources are placed to the right or left of the 0° axis, the output of one microphone increases while the other decreases. At a 45° angle of incidence, one microphone reaches its maximum output while the other remains at zero, allowing for precise tracking of sound arrival angles.
4. Positioning the Blumlein microphones within the critical distance allows engineers to achieve an appropriate balance between direct and reverberant sound. This setup can enhance the overall sound quality by capturing the ambient space effectively.
5. One disadvantage of the Blumlein technique is that both the positive and negative lobes of the microphones pick up lateral reflections, which can lead to phase cancellations between the two. This may result in a 'hollow' sound quality for listeners.
Pages 48-49
Answers:
1.B 2.B 3.C 4.C 5.B
Pages 49-50 True/ False
Answers:
1. True 2. False 3. True 4. False 5. True
Pages 51-52 True/ False
Answers:
1. False 2. True 3. False 4. True 5. False
Pages 51 -52 Questions:
1. What is the purpose of using near-coincident arrays in audio recording?
2. Describe the ORTF microphone configuration and its significance.
3. What are the characteristics of the sound produced by the ORTF technique at different frequencies?
4. Explain the NOS microphone configuration and its limitations.
5. What do recordists appreciate about the ORTF array according to Bruce and Lenny Bartlett?
Answers:
1. Near-coincident arrays are used in audio recording to mimic human hearing by separating the microphone capsules by distances that approximate the spacing between the ears of an average person. This technique allows for the achievement of stereo images through differences in both intensity and time of arrival, adding a sense of spaciousness to recordings.
2. The ORTF configuration involves angling two cardioid microphones at 110° and separating the capsules by 17 centimeters (6.7 inches). This setup approximates the average distance between human ears and maintains adequate phase integrity for monophonic broadcasting, producing a pleasing stereo image.
3. At low frequencies, the signals from the microphones in the ORTF technique are virtually phase coherent, leading to significant differences in level between the two channels. However, at higher frequencies (1 kHz and above), slight comb filtering occurs, creating a sense of 'air' or 'openness' in the stereo image.
4. The NOS configuration places two cardioid microphones at an angle of 90° and separates the capsules by 30 centimeters (11.8 inches). While this technique relies on differences in level between the microphones, the wider spacing can produce phase incoherence that becomes audible at about 250 Hz, making it less useful for monophonic broadcasting.
5. Recordists appreciate the ORTF array for providing a good overall compromise of localization accuracy, image sharpness, an even balance across the stage, and ambient warmth, as noted by Bruce and Lenny Bartlett.
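The time-of-arrival differences that the ORTF and NOS spacings rely on can be estimated with the usual far-field approximation Δt = d × sin(θ)/c; the speed of sound and the example angles below are assumptions added for illustration.

```python
import math

C = 343.0   # m/s, assumed speed of sound

def arrival_difference_ms(spacing_m, angle_deg):
    """Far-field time-of-arrival difference between two spaced capsules."""
    return spacing_m * math.sin(math.radians(angle_deg)) / C * 1000

for name, spacing in (("ORTF, 17 cm", 0.17), ("NOS, 30 cm", 0.30)):
    print(f"{name}: {arrival_difference_ms(spacing, 45):.2f} ms at 45 deg, "
          f"{arrival_difference_ms(spacing, 90):.2f} ms from the side")
```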
Pages 53-54
Answers:
1.B 2.B 3.A 4.A 5.C
Pages 53-54 Questions:
1. Describe the purpose of the DIN configuration in stereo microphone techniques.
2. What are the key features of the OSS (Optimum Stereo Signal) system proposed by Jürg Jecklin?
3. Explain how the acoustic baffle in Jecklin's OSS technique contributes to the stereo image.
4. What is the recommended spacing between unidirectional microphones in Jecklin's expanded OSS technique?
5. What conditions are ideal for the array produced by the OSS technique to work best?
Answers:
1. The DIN configuration is designed to produce a stereo image through a blend of level and time-of-arrival differences. It is particularly useful for recording at shorter distances, making it ideal for instruments like pianos, small ensembles, or sections of an orchestra.
2. The OSS system approximates natural binaural hearing by simulating the differences in level, time, and frequency response that listeners experience at a live event. It aims to reproduce a realistic stereo image through loudspeakers, and it involves placing unidirectional microphones on either side of a round disk to replicate the ear positions of the average human head.
3. The acoustic baffle between the microphones improves the apparent width and clarity of the stereo image. It allows both microphones to receive the same signal below 200 Hz, while diffraction at the edge of the disk increases the effect of separation at higher frequencies, enhancing the stereo effect.
4. In Jecklin's expanded OSS technique, he recommended a spacing of 36 centimeters (14.25 inches) between the unidirectional microphones mounted on either side of a 35-centimeter (13.75-inch) disk.
5. The OSS technique works best with ensembles that can achieve internal balance, such as classical music, and in rooms that have nominal or short reverberation times. For optimal sound quality, recordists often place the array at or within the critical distance.
Pages 54-55-56
Answers:
1.B 2.B 3. B 4.B 5.A
Pages 54-55-56 Questions:
1. What is the primary reason audio engineers use unidirectional microphones in spaced microphone applications?
2. How does the placement of microphones affect the balance between direct sound and reverberation?
3. What is the significance of the spacing between microphones in creating a stereo image?
4. What issue can arise when using widely spaced microphone pairs, and how can it be addressed?
5. What effect does random phase relationships between channels have on the sound of a recording?
Answers:
1. Audio engineers favor unidirectional microphones for spaced microphone applications primarily because omnidirectional microphones pick up sound equally well from all angles. This characteristic makes unidirectional microphones ideal for achieving the desired proportion of direct to reverberant sound, allowing for better control over the recording environment.
2. The placement of microphones affects the balance between direct sound and reverberation by determining the amount of reverberation captured. Recordists often use the critical distance as a starting point for this balance; the closer the microphones are placed to the sound source, the more the direct sound dominates, resulting in less reverberation being picked up.
3. The spacing between microphones is significant in creating a stereo image because it allows for differences in arrival time of sound to each microphone. This difference helps to create a stereo effect, where sound sources located at different positions can be perceived as coming from specific locations in the sound stage, enhancing the listening experience.
4. When using widely spaced microphone pairs, an issue that can arise is the creation of a 'hole in the middle' of the sound stage, where the center may lack coverage. This problem can be addressed by adding a third microphone or another pair of omnidirectional microphones placed midway between the main pair, mixing the extra signal into both channels to fill the gap.
5. Random phase relationships between channels can cause comb filtering, which results in a diffuse and blended character of the overall sound in a recording. This effect makes it harder for listeners to localize off-center stereo images, leading to a less focused sound, but for some, it creates a sense of spaciousness that enhances the perception of concert hall reverberation.
Pages 56-57 True/ False
Answers:
1. True 2. False 3. True 4. False 5. True
Pages 56-57 Questions:
1. What technique did Tony Faulkner develop for microphone placement, and what is its primary purpose?
2. How does the distance of the microphones from the ensemble affect the sound quality according to Faulkner's method?
3. What adjustments do recordists make when they need to place microphones closer to the ensemble?
4. Describe the expanded Faulkner phased array technique used for large orchestras. What components are involved?
5. What is the significance of the microphone placement in very reverberant environments, such as large churches?
Answers:
1. Tony Faulkner developed a technique of spacing two bi-directional microphones 20 centimeters apart, directly facing the sound source. The primary purpose of this technique is to produce coherent imaging and an open feeling in the sound, which is particularly useful in very reverberant environments.
2. The increased distance from the ensemble allows for clarity at the center of the stereo image, which Faulkner refers to as a drier sound. This distance also enables engineers to set the microphones at a lower height, potentially as low as ear level.
3. When recordists need to place the microphones closer to the ensemble, they sometimes use unidirectional flanking microphones, which are roughly a meter or 3.3 feet apart. This adjustment helps counter the narrow stereo spread that might result from using the spaced pair alone.
4. The expanded Faulkner phased array technique for large orchestras involves placing two omnidirectional microphones 67 centimeters apart over the conductor's head, with an ORTF pair of cardioids in the middle, spaced 41 centimeters apart. This configuration provides a sense of presence while the omnis create a feeling of ambience around the performers.
5. In very reverberant environments like large churches, the microphone placement technique allows recordists to achieve clarity at the center of the stereo image while still capturing the natural ambience of the room. This is crucial for producing a balanced and clear sound in such acoustically challenging spaces.
Pages 57-58 True/False
Answers:
1. True 2. False 3. True 4. True 5. False
Chapter 6
Page 59-60
Answers: 1.A 2.C 3.B 4.B 5.C
Pages 59-60 Questions:
1. What are the key criteria for assessing audio signals during the recording process according to the European Broadcasting Union?
2. How does the perception of room ambience affect the recording of soloists or ensembles?
3. What role does transparency play in critical listening during the recording process?
4. Why is it important for engineers to ensure the intelligibility of individual elements in a recording?
5. What is the significance of dynamic range in a music recording?
Answers:
1. The key criteria for assessing audio signals include Spatial Environment, Transparency, Timbre, Loudness, Stereo Image, and Noise and Distortion. These criteria help engineers evaluate the sound quality and ensure that recordings enhance the listener's experience.
2. The perception of room ambience, particularly the reverberation time and sonic characteristics of reflections, helps place soloists or ensembles in a natural-sounding environment. This determines whether the natural reverberant qualities of the space need to be enhanced by artificial means to achieve the desired sound.
3. Transparency refers to the clarity of details in a recording, which includes the relationship between reverberation and the intelligibility of instruments or words. It allows listeners to identify and differentiate between instruments and voices, enhancing the overall listening experience.
4. Ensuring the intelligibility of individual elements in a recording is crucial because it allows listeners to identify and distinguish between different instruments and voices, even amidst reverberation. This clarity contributes to the overall quality and emotional impact of the music.
5. Dynamic range in a music recording refers to the variations of light and shade across or between phrases and larger sections of the music. It engages the hearers' emotions directly and contributes to the realism and expressiveness of the performance.
Page 60 True/False
Answers:
1. True 2.False 3.True 4.True 5.True
Pages 61-62
Answers:
1.D 2.C 3.A 4.A 5.B
Pages 61-62 Questions:
1. What factors influence the placement of microphones in relation to room ambience during tracking?
2. How do engineers determine the critical distance in a room?
3. What is the significance of the sweet spot in microphone placement?
4. What is the third option for microphone placement mentioned in the document, and what does it require?
5. What considerations do recordists take into account when deciding on the perspective to present to listeners?
Answers:
1. The placement of microphones is influenced by the nature of the space, the desired amount of natural reverberation, and the critical distance, which is the point where direct and reverberant sound are equal in level. Engineers may choose to place microphones closer to the sound source for more direct sound or farther away to capture more reverberation.
2. Engineers determine the critical distance by using an SPL meter to measure the sound pressure level (SPL) at various distances from the sound source. They start by measuring close to the source and then double the distance, noting the SPL drop, until they find the point where the level remains constant, indicating the critical distance.
3. The sweet spot is the area within the critical distance where engineers can achieve the best ratio of direct to reverberant sound. It allows for a balanced perspective that captures detail without making the sound source seem too distant or muddy.
4. The third option involves placing microphones very close to the sound source to completely defeat room reflections. This approach requires the addition of artificial reverberation during post-production to enhance the sound quality, especially in concert halls with poor natural ambience.
5. Recordists consider the size of the room they wish to emulate, ranging from larger spaces to more intimate rooms. This helps them determine how to position microphones and mix sounds to create an appropriate listening experience that reflects the intended environment.
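The SPL-meter procedure in answer 2 can be mimicked with the common statistical model in which the direct level falls 6 dB per doubling of distance while the reverberant level stays roughly constant. The source and reverberant levels below are invented for the example; only the doubling procedure itself comes from the text.

```python
import math

direct_at_1m = 90.0   # dB SPL at 1 m, an assumed example value
reverberant = 72.0    # dB SPL of the (roughly constant) reverberant field

d = 1.0
while True:
    direct = direct_at_1m - 20 * math.log10(d)   # -6 dB per doubling
    total = 10 * math.log10(10 ** (direct / 10) + 10 ** (reverberant / 10))
    print(f"{d:5.1f} m: direct {direct:5.1f} dB, meter reads {total:5.1f} dB")
    if direct <= reverberant:   # direct and reverberant sound are now equal:
        break                   # the critical distance has been passed
    d *= 2
# In this made-up room the two levels meet at about 7.9 m, the point where
# the meter reading stops dropping by 6 dB per doubling of the distance.
```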
Chapter 7
Pages 65-66
Answers:
1.C 2.A 3.B 4.B 5.D
Pages 65-66
1. What historical issue in telephonic communication led to the development of equalizers?
2. How do audio editors use equalization (EQ) in their work?
3. What are the different types of digital filters mentioned in the document, and how are they classified?
4. What is the significance of the cut-off frequency in a filter?
5. What is the difference between a gentle and a steep attenuation slope in filters?
Answers:
1. In the early days of telephonic communication, long cable runs caused a significant loss of high-frequency content, resulting in the output signal being markedly inferior to the input signal. To compensate for this deficiency, electronics engineers developed circuits called equalizers to boost high frequencies, allowing the output to become roughly 'equal' to the input.
2. Audio editors use equalization (EQ) to modify the frequency content of audio signals to enhance the aesthetic appeal of individual or combined tracks. They achieve these alterations through digital filters, which allow them to adjust the balance of various sine waves that make up complex waveforms.
3. Digital filters are classified into several types, including pass filters, shelf filters, parametric filters, and graphic filters. Pass filters allow certain frequencies to pass while attenuating others, and they can be further categorized into high-pass filters (HPF) and low-pass filters (LPF), depending on whether they allow frequencies above or below a cut-off point to pass.
4. The cut-off frequency is significant because it defines the point at which the response of the filter is 3.0 dB below the nominal level of the unaffected signal. This point indicates the transition between the frequencies that are allowed to pass through the filter and those that are attenuated.
5. The difference between a gentle and a steep attenuation slope in filters is determined by the rate of decibels of reduction across an octave span. A drop of 6.0 dB per octave produces a gentle attenuation slope, while a steeper descent of 12.0 dB per octave results in a much sharper attenuation slope.
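The 3 dB-down definition of the cut-off frequency and the 6 versus 12 dB-per-octave slopes in answers 4 and 5 can be checked against the magnitude responses of first- and second-order high-pass filters; the 100 Hz cut-off below is an arbitrary example.

```python
import math

FC = 100.0   # Hz, arbitrary example cut-off frequency

def hpf_first_order_db(f):
    r = f / FC
    return 20 * math.log10(r / math.sqrt(1 + r * r))

def hpf_second_order_db(f):   # 2nd-order Butterworth high-pass
    r = f / FC
    return 20 * math.log10(r * r / math.sqrt(1 + r ** 4))

for f in (FC, FC / 2, FC / 4):
    print(f"{f:6.1f} Hz: 1st order {hpf_first_order_db(f):6.1f} dB, "
          f"2nd order {hpf_second_order_db(f):6.1f} dB")
# Both filters read -3.0 dB at the cut-off; well below it they fall at
# roughly 6 and 12 dB per octave respectively.
```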
Pages 67-68 True/False
Answers:
1. False 2.True 3.True 4.True 5.False
Pages 68-69
Answers: 1.A 2.D 3.B 4.B 5.C
Page 70 Questions:
1. What is the purpose of a high-pass filter in audio mixing?
2. How can a shelf filter for higher frequencies enhance the audio mix?
3. What is the significance of sweeping in audio mixing?
4. Why is the 2-5 kHz frequency range important in audio mixing?
5. What did the research by Fletcher and Munson reveal about human hearing and frequency perception?
Answers:
1. A high-pass filter helps to remove undesirable low-frequency information, often referred to as 'muddy' or 'rumble' sounds, typically in the range of 40-100 Hz.
2. A shelf filter for higher frequencies can enhance the audio mix by gently boosting the frequencies above 8-10 kHz, which can create the impression of 'sheen' in the sound.
3. Sweeping is significant in audio mixing as it helps to identify offending frequencies that can be difficult to hear. By sweeping the mid-range with a heightened bell curve, typically around 800 Hz-1 kHz, engineers can find and cut these frequencies to lessen their negative effect.
4. The 2-5 kHz frequency range is important in audio mixing because human hearing is most sensitive to sounds in this range. Small boosts in this area can add brightness to the audio.
5. The research by Fletcher and Munson revealed that human hearing is not equally sensitive to all frequencies, and playback level affects how listeners perceive frequency. Their findings led to the creation of equal loudness curves, which show how humans typically respond to frequencies at different loudness levels.
Chapter 8
Page 73 Questions:
1. What was the original purpose of compressors in audio engineering?
2. How do modern compressors differ from the manual methods used by engineers before their invention?
3. What are the three main stages of a compressor?
4. What user-adjustable features are found in the gain-control stage of a compressor?
5. Why do engineers apply make-up gain after the compression process?
Answers:
1. Compressors were first invented for use in live radio broadcasts to prevent the level of an announcer's voice from suddenly overloading the signal chain.
2. Modern compressors can react instantaneously to fluctuations in level, allowing them to balance dynamics from note to note, whereas engineers previously relied on manually riding faders to reduce dynamics.
3. The three main stages of a compressor are level detection (either peak or RMS), gain control, and make-up gain.
4. The gain-control stage contains several user-adjustable features such as threshold, ratio, attack, release, knee, and hold.
5. Engineers apply make-up gain to amplify the output of the compressor so that the perceived loudness of the overall signal appears greater.
Pages 73-75
Answers: 1.A 2.B 3.D 4.A 5.C
Pages 75-77 True/False
Answers:
1. True 2.True 3.False 4.True 5.False
Pages 77-78 Questions:
1. What is the role of the attack control in a compressor?
2. How does the release control affect the compressor's output?
3. What is the difference between soft knee and hard knee in a compressor?
4. What is the purpose of the final gain control in a compressor?
5. How does the hold control function in a compressor?
Answers:
1. The attack control in a compressor determines the length of time between the moment an audio signal exceeds the threshold and the start of gain reduction. Longer attack times (20-50 ms) often provide a more natural representation of the source.
2. The release control determines how quickly the compressor returns the attenuated signal back to its original level. Longer release times (around 135 ms) make the transition back to the original level less obvious, while shorter release times can create a 'pumping' effect.
3. The soft knee function within a compressor allows compression to be introduced gradually across the threshold, softening the onset of gain reduction. In contrast, a hard knee introduces gain reduction suddenly when the threshold is exceeded.
4. The final gain control, also known as make-up gain or output, is used to compensate for the perceived loudness discrepancy that occurs after compression. It boosts the output of the gain-modification stage to regain lost loudness.
5. The hold control in a compressor allows users to decide how long the compressor 'holds' the gain reduction before the release begins, providing additional control over the dynamic response.
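The controls described above map onto a simple feed-forward gain computer: levels above the threshold are scaled by the ratio, the gain changes are smoothed with attack and release time constants, and make-up gain restores the lost loudness. The sketch below is a bare-bones illustration of those stages with invented parameter values, not the algorithm of any particular compressor.

```python
import math

def compress(levels_db, threshold=-20.0, ratio=4.0,
             attack_ms=20.0, release_ms=135.0, makeup_db=6.0, fs=1000.0):
    """Feed-forward compression applied to a per-sample level envelope (dB)."""
    a_att = math.exp(-1.0 / (attack_ms / 1000 * fs))
    a_rel = math.exp(-1.0 / (release_ms / 1000 * fs))
    gain_db, out = 0.0, []
    for lvl in levels_db:
        # Static curve: above the threshold the output rises only 1 dB
        # for every `ratio` dB of input, so the target gain is negative.
        target = 0.0 if lvl <= threshold else threshold + (lvl - threshold) / ratio - lvl
        # Attack smoothing while gain is being reduced, release while it recovers.
        coeff = a_att if target < gain_db else a_rel
        gain_db = coeff * gain_db + (1 - coeff) * target
        out.append(lvl + gain_db + makeup_db)
    return out

# A level step from -30 dB to -8 dB and back shows the gradual onset of gain
# reduction (attack) and the slower return to unity gain (release).
envelope = [-30.0] * 50 + [-8.0] * 200 + [-30.0] * 200
print([round(v, 1) for v in compress(envelope)[45:60]])
```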
Page 79 True/False
Answers:
1. True 2. False 3. False (inter-sample peaks) 4. True 5. False (true peak limit)
Pages 80-81 True/False
Answers:
1.True 2.False 3.False 4.True 5. False
Pages 81-82
Answers: 1 B 2.A 3.C 4.D 5.B
Page 82 True/False
Answers:
1 False 2.True 3.False 4.True 5.True
Pages 83-84
Answers: 1.B 2.C 3.A 4.D 5.C
Pages 83-84 Questions:
1. Describe the approach Sonnox takes in designing the GUI for their Oxford Dynamic EQ compared to other dynamic EQs.
2. What types of filters does the Oxford Dynamic EQ provide, and how can they be applied to the frequency bands?
3. Explain how the plugin prevents over-processing of the audio signal.
4. What functionality does the headphone icon provide in the Oxford Dynamic EQ?
5. How do the Attack and Release parameters affect the dynamics processing in the Oxford Dynamic EQ?
Answers:
1. Sonnox takes a different approach by using a graphical interface similar to those found in parametric EQ plugins, rather than replicating the control knobs of a hardware unit like Brainworx. This allows users to adjust parameters visually, making it more intuitive.
2. The Oxford Dynamic EQ provides three types of filters: low shelf, bell, and high shelf. These filters can be applied to each of the five distinct frequency bands, allowing users to adjust the center or corner frequency by dragging control points.
3. The plugin is designed to constrain the amount of gain change between the offset and target settings, which prevents over-processing. This means that the gain cannot exceed certain limits, ensuring a more controlled audio output.
4. When engineers click on the headphone icon in the EQ line, they can listen to just the frequency range that will be processed. This feature allows for precise monitoring of the adjustments being made to the audio signal.
5. The Attack parameter determines how quickly the band approaches the target level, while the Release knob sets how slowly the band will return to the offset level. Together, these parameters control the responsiveness of the dynamics processing applied to the audio signal.
Pages 84-85 True/False
Answers:
1.True 2.False 3.True 4.True 5.True
Pages 86-87
Answers: 1.A 2.D 3.B 4.C 5.A
Pages 86-87 Questions:
1. Explain the basic operation of a de-esser and how it affects the vocal track.
2. What features does the Sonnox SuprEsser offer to audio editors for treating excessive sibilance?
3. Describe the function of the band-pass and band-reject filters in the Sonnox SuprEsser.
4. What are the three listening modes available in the Sonnox SuprEsser, and what do they do?
5. How does the de-esser ensure that listeners do not hear the filtered signal sent along the side chain?
Answers:
1. A de-esser operates on the principle of side chaining, where it divides the input signal into two paths. One path retains the original vocal track and goes to the compressor, while the other path, which is a modified version of the input signal, travels to the level detector. This modified signal receives a boost in the high-frequency band that contains sibilance, triggering the compressor to attenuate those frequencies in the original track, thus reducing harshness while allowing other vocal components to pass through unaltered.
2. The Sonnox SuprEsser employs principles similar to traditional de-essers and provides audio editors with considerable flexibility in managing excessive sibilance. It is designed as a compressor linked to two filters: a narrow band-pass filter and a band-reject filter. This allows users to isolate problematic audio frequencies and apply compression selectively, enhancing control over the de-essing process.
3. In the Sonnox SuprEsser, the band-pass filter isolates the problematic audio frequencies specified by the user, allowing only that content to go to the compressor for attenuation. Conversely, the band-reject filter contains the remaining frequency components of the input signal, effectively removing the isolated frequencies sent to the compressor. This results in an output that retains the original signal minus the excessive sibilance.
4. The Sonnox SuprEsser features three listening modes: the Inside button, which solos the output of the band-pass filter; the Outside button, which reveals the output of the band-reject filter; and the Mix button, which blends the outputs of both filters in a 50:50 ratio. These modes allow users to hear the operation of the plugin from different perspectives, aiding in the adjustment of sibilance treatment.
5. The de-esser is designed so that the filtered signal sent along the side chain functions solely to trigger the compressor, meaning listeners do not hear this signal. Instead, the compressor applies attenuation exclusively to the unmodified input, allowing the remaining components of the vocal frequency spectrum to pass through the device unaltered, thus maintaining the integrity of the original vocal track.
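The side-chain arrangement described in answers 1 and 3 can be sketched as follows: band-pass the input to build a detector signal, derive a gain-reduction envelope from it, and apply that gain to the unmodified signal path. The band limits, threshold, and SciPy filters below are illustrative choices, not the Sonnox implementation.

```python
import numpy as np
from scipy.signal import butter, lfilter

fs = 48_000
t = np.arange(fs) / fs
# Toy "vocal": a 300 Hz tone plus a burst of 7 kHz "sibilance" in the middle.
voice = 0.3 * np.sin(2 * np.pi * 300 * t)
sibilance = 0.3 * np.sin(2 * np.pi * 7000 * t) * ((t > 0.4) & (t < 0.6))
x = voice + sibilance

# Side chain: isolate the band where sibilance lives (assumed 5-9 kHz here).
b, a = butter(2, [5000, 9000], btype="bandpass", fs=fs)
detector = lfilter(b, a, x)

# Smooth the detector into an envelope, then reduce gain where it is too hot.
env = np.sqrt(lfilter([1e-3], [1, -(1 - 1e-3)], detector ** 2))
threshold = 0.05
gain = np.where(env > threshold, threshold / np.maximum(env, 1e-9), 1.0)

y = x * gain   # attenuation is applied to the full, unmodified input signal

def band_rms(sig):
    return float(np.sqrt(np.mean(lfilter(b, a, sig) ** 2)))

print("sibilance-band RMS before:", round(band_rms(x), 3),
      "after:", round(band_rms(y), 3))
```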
Pages 87-88
Answers: 1.A 2.B 3.A 4.C 5.A
Chapter 9
Page 89-90 Figure 9.1
Answers:
1. A 2.A 3.A 4.A 5.A
Page 90
1. What features does Sonnox's Oxford Reverb offer for controlling early reflections?
2. How does the 'Taper' control in Oxford Reverb affect the sound?
3. What is the purpose of the 'Absorption' control in the Oxford Reverb plugin?
4. What controls are available for shaping the reverb tail in Oxford Reverb?
5. How does increasing the 'Feedback' control in Oxford Reverb affect the sound?
Answers:
1. Sonnox's Oxford Reverb offers several features for controlling early reflections, including the ability to set the shape and size of the simulated room, as well as controls for width, taper, feed along, feedback, and absorption. Width determines the stereo separation of the reflections, taper affects the loudness of the reflections based on distance, feed along specifies the reinjection of distributed sound, feedback adjusts the proportion of reflections recirculated, and absorption models high-frequency reduction based on the nature of reflective surfaces.
2. The 'Taper' control in Oxford Reverb affects the loudness of the reflections in relation to the distance they travel. As sound bounces around the room, the levels of the reflections are reduced, and taper allows users to adjust this reduction in loudness.
3. The 'Absorption' control in the Oxford Reverb plugin models the amount of high-frequency reduction that occurs in a room as soundwaves interact with various surfaces. This allows users to mimic the reflective characteristics of different surfaces, such as the greater absorption of soft furnishings compared to the lesser absorption of harder walls.
4. Oxford Reverb provides several controls for shaping the reverb tail, including reverb time, overall size, dispersion, phase difference, phase modulation, absorption, and diversity. These controls allow users to customize the characteristics of the reverb tail to fit their audio needs.
5. Increasing the 'Feedback' control in Oxford Reverb adjusts the proportion of reflections that are recirculated within the simulated environment. This can lengthen the duration of the reflections and enhance the effect of room-mode frequencies, contributing to a more pronounced 'boominess' in the sound.
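As a rough numerical illustration of how 'Taper' and 'Absorption' behave (a sketch with invented delay times and coefficients, not Oxford Reverb's actual implementation), each successive early reflection below is made both quieter with travelled distance and duller through a one-pole low-pass applied once per bounce.

import numpy as np
from scipy.signal import lfilter

def early_reflections(dry, sr, delays_ms=(11, 19, 29, 43), taper=0.7, absorption=0.3):
    out = np.copy(dry)
    for i, d_ms in enumerate(delays_ms, start=1):
        n = int(sr * d_ms / 1000.0)                 # delay in samples
        reflection = np.zeros_like(dry)
        reflection[n:] = dry[:-n] * (taper ** i)    # taper: quieter with each bounce
        for _ in range(i):                          # absorption: duller with each bounce
            reflection = lfilter([1.0 - absorption], [1.0, -absorption], reflection)
        out += reflection
    return out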
Page 91 Figure 9.2
Answers:
1.B 2.C 3.A 4.B 5.B
Page 91
1. What role does 'Overall Size' play in creating the aural image of space in reverberation?
2. How does 'Dispersion' affect the reflections in reverberation?
3. What is the effect of 'Phase Difference' on the stereo sound field?
4. Why is EQ used in reverberation plugins, and what is its effect on high frequencies?
5. What adjustments can be made to lower frequencies in reverberation to enhance warmth?
Answers:
1. Overall Size creates the aural image of space through the size of the delays within the tail of the reverberation. Larger settings provide the greatest sense of space but result in a slower buildup in the density of the reflections.
2. Dispersion manages the rate at which the reflections build over time, giving the engineer control over the complexity and texture of the reverberation.
3. Phase Difference allows users to manipulate the rate at which phase disparity grows between right and left channels, with greater settings causing a widening and deepening of the stereo sound field.
4. EQ is used in reverberation plugins to more closely emulate the natural frequency response of physical spaces. It is particularly important because rooms absorb high frequencies more easily than low frequencies, so EQ can lessen the prominence of higher frequencies in the reverb to make it sound more realistic.
5. To create greater warmth in reverberation, a gentle boost to lower frequencies can be applied. However, if the low-end content in a track already makes the reverb sound too 'boomy,' judiciously applied EQ can mitigate the effect of undesirable frequencies.
Page 92 Figure 9.3 True/False
Answers:
1. True 2.False 3.False 4.True 5.True
Page 92-93 Figure 9.4
1. What is the purpose of the Pre-delay in reverb settings?
2. How does the Reverb Size affect the sound of the reverb?
3. What does the Damping Frequency knob control in reverb settings?
4. What is the effect of the Width control on reverb?
5. What is the function of the Tail Suppress feature in reverb settings?
Answers:
1. The Pre-delay helps increase the clarity and intelligibility of the signal by allowing a brief moment before the reverb effect begins, which can enhance the perception of the sound.
2. The Reverb Size determines the dimensions of the room, which affects the buildup of reverb. In larger spaces, the accumulation of reflections occurs more slowly, making the reverb sound less dense.
3. The Damping Frequency knob sets the frequency above which damping takes place, affecting how quickly the highest frequencies die away or roll off in the reverb effect.
4. The Width control applies only to the tail of the reverb, where a wider setting will 'open up' the space, while a narrower tail will 'focus' the sound more tightly.
5. The Tail Suppress feature lowers the level of the late reflections when a loud input signal might over-trigger the reverb, helping to maintain clarity in the mix.
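Several of these controls correspond to very simple operations. Pre-delay, for instance, is nothing more than a short silence inserted before the wet signal starts, so the dry sound is heard on its own for a moment; a minimal sketch follows (the 20-millisecond value and the dry/wet mix are arbitrary illustrations).

import numpy as np

def add_predelay(wet, sr, predelay_ms=20.0):
    n = int(sr * predelay_ms / 1000.0)            # pre-delay length in samples
    return np.concatenate([np.zeros(n), wet])[:len(wet)]

def mix(dry, wet, wet_level=0.3):
    return dry + wet_level * wet                  # dry signal plus the delayed reverb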
Page 94 Figure 9.5 (Figure 9.6)
Answers:
1. A 2. A 3. A 4. A 5. A
Page 94 Figure 9.6
1. What are the three general types of reverb that engineers can select from in the Attack pane?
2. How does the Diffuser Size knob affect the reverb settings?
3. What is the purpose of the Envelope Attack control in the plugin?
4. What does the Envelope Time control do in the plugin?
5. How does the Envelope Slope controller affect the signal as it enters the plugin?
Answers:
1. The three general types of reverb that engineers can select from in the Attack pane are plate, chamber, and hall.
2. The Diffuser Size knob models the dimensions of the irregularities on or near the reflective surfaces, which influences how soundwaves bounce off the objects they strike.
3. The Envelope Attack control adjusts the way the signal enters the plugin, affecting the strength of early audio energy versus later energy based on the attack values.
4. The Envelope Time control adjusts how long it takes for the signal to enter the plugin, with shorter values allowing for quicker injection and longer values resulting in a more gradual 'speak' of the reverb.
5. The Envelope Slope controller consists of a low-pass filter that models air absorption, and its various settings change how strongly the filter affects the later energy of the signal.
Page 95 Figure 9.7 True/False
Answers:
1. True 2. False 3. True 4. False 5. True
Page 96-97 Figure 9.8
1. What are the three main functions of the dials in the Warp pane?
2. How does the Threshold dial affect the compressor's processing?
3. What is the purpose of the Cut button in the Warp pane?
4. What does the Attack dial control in the compressor?
5. What options are available for changing the bit depth in the Warp pane?
Answers:
1. The three main functions of the dials in the Warp pane are to determine the dynamic range of the input, add overdrive to the signal, and change the bit depth (word size) of the processing.
2. The Threshold dial sets the level at which the compressor's processing begins. When the audio signal exceeds this threshold, the compressor starts to process the signal.
3. The Cut button allows users to set a lower threshold for the compressor. Any signal below this Cut level will not be boosted, which is useful for avoiding the amplification of low-level noise.
4. The Attack dial controls the length of time between the moment an audio signal exceeds the threshold and the start of the compressor's action.
5. The Word Size knob in the Warp pane allows users to change the bit depth of the processing from floating-point to 24, 20, 18, 16, 14, or 12 bits.
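The effect of the Word Size knob can be imitated by requantizing a floating-point signal to a smaller number of bits; the sketch below is a generic illustration of bit-depth reduction, not the plugin's actual processing.

import numpy as np

def requantize(x, bits=16):
    # Round a signal in the range -1.0..1.0 to an N-bit grid of sample values.
    levels = 2 ** (bits - 1)
    return np.round(x * levels) / levels

x = np.sin(2 * np.pi * np.linspace(0.0, 1.0, 1000))
for b in (24, 16, 12):
    error = np.max(np.abs(x - requantize(x, b)))
    print(f"{b}-bit worst-case quantization error: {error:.1e}")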
Page 97-98 Figures 9.9-9.10 True/False
Answers:
1. False 2.True 3.False 4.True 5.True
Page 99 Figure 9.11 True/False
Answers:
1. True 2. False 3. True 4. True 5. True
Page 99 Figure 9.11
Answers:
1. A 2. C 3. B 4. B 5. C
Page 100 Figures 9.12-9.13 True/False
Answers:
1. True 2. False 3. False 4. True 5. False
Page 101 Figure 9.14
Answers:
1. B 2. A 3. A 4. A 5. A
Page 102 Figure 9.15-16
1. What is the principle behind convolution in reverb plugins?
2. How do engineers create impulse responses for convolution reverb?
3. What challenges do convolution reverbs face regarding impulse responses?
4. What unique approach does EastWest's Spaces reverb take in capturing impulse responses?
5. What controls do users have access to in the GUI of EastWest's Spaces II?
Answers:
1. The principle behind convolution in reverb plugins is the blending or convolving of one signal with another. These plugins combine dry input signals with previously recorded impulse responses to simulate the sound of music performed in ambient spaces.
2. Engineers create impulse responses by recording all the room reflections generated by an initial stimulus, such as a short burst of sound or a full-range frequency sweep. After the sound of the stimulus is removed through a process known as deconvolution, the remaining reverberation characteristics, or the room's reverb tail, can be added to a dry signal.
3. Convolution reverbs face two main challenges: the impulse responses must be created carefully, otherwise the resulting simulations can sound quite sterile, and convolving impulse responses with audio requires a significant amount of processing power.
4. EastWest's Spaces reverb takes a unique approach by focusing on the reverberation characteristics produced by specific instruments rather than the generalized response of a space. This involves positioning loudspeakers to mimic the projection properties of various instruments, capturing the response from the exact position an instrument would be on stage.
5. In the GUI of EastWest's Spaces II, users have access to controls that allow them to select the level of the audio's input, the length of the pre-delay in milliseconds, the extent of the dry sound included in the output, and the output level of the wet signal. Additionally, users can adjust the degree of high- and low-frequency roll-off through pass filters and tailor the decay time of the impulse.
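The convolution operation itself is straightforward once an impulse response exists; the following sketch blends a dry mono track with a recorded room response using standard Python libraries (file names are placeholders, and the soundfile package is assumed to be available).

import numpy as np
import soundfile as sf                       # assumed available for reading/writing WAV files
from scipy.signal import fftconvolve

dry, sr = sf.read("dry_vocal.wav")           # placeholder: mono dry recording
ir, sr_ir = sf.read("room_response.wav")     # placeholder: mono impulse response (reverb tail)
assert sr == sr_ir, "dry track and impulse response must share a sample rate"

wet = fftconvolve(dry, ir)[:len(dry)]        # convolve the dry signal with the room's response
wet = wet / np.max(np.abs(wet))              # simple normalization to avoid clipping
sf.write("vocal_in_room.wav", 0.7 * dry + 0.3 * wet, sr)   # blend dry and wet to taste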
Page 102 Convolution
Answers:
1. A 2.C 3.B 4.B 5.B
Chapter 10
Page 105 True/False
Answers:
1.True 2.False 3.True 4.False 5.True
Page 106 True/False
Answers:
1. True 2.True 3.False 4.False 5.True
Page 105
1. What is the primary function of a container in digital audio files?
2. What are the two types of compression methods used by codecs?
3. What is the difference between FLAC and ALAC in terms of their container formats?
4. What is the significance of the Free Lossless Audio Codec (FLAC) in the audio industry?
5. What are the two standard uncompressed file types available to recordists?
Answers:
1. A container functions as a wrapper that holds digital audio information along with metadata. It specifies the organization of the data within it, such as interleaving video and audio data in chunks for streaming purposes.
2. Codecs employ two types of compression methods: lossless and lossy. Lossless algorithms compress data to about half the size of the original while allowing perfect reconstruction during decoding, whereas lossy codecs remove information that humans do not hear well, achieving greater reductions at the expense of sound quality.
3. FLAC can be stored in the Ogg container, in which case it is known as Ogg FLAC, while ALAC employs an MP4 audio-only container with the file extension .m4a.
4. FLAC compresses data without any loss of audio quality and is made available at no charge by Xiph.Org, making it the standard lossless audio codec internationally.
5. The two standard uncompressed file types available to recordists are WAV (Waveform Audio File Format) and AIFF (Apple's Audio Interchange File Format), both of which use a RIFF container to hold PCM digital audio information.
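As a small practical illustration of lossless compression, the soundfile library (assumed to be installed with FLAC support; file names are placeholders) can convert a WAV file to FLAC and read it back to confirm that the samples are reconstructed exactly.

import soundfile as sf

data, sr = sf.read("take_01.wav", dtype="int16")        # PCM audio from the RIFF/WAV container
sf.write("take_01.flac", data, sr, subtype="PCM_16")    # same samples, losslessly compressed

decoded, _ = sf.read("take_01.flac", dtype="int16")
assert (decoded == data).all()                          # lossless: perfect reconstruction on decode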
Page 107
Answers:
1. B 2.C 3.A 4.D 5.B
Page 107-108 True/False
Answers:
1. True 2.True 3.False 4.True 5.False
Page 107
1. What are the file sizes of the original WAV file and the converted WAV file according to Table 10.1?
2. What is the definition of loudness as defined by the Advanced Television Systems Committee (ATSC) in 2013?
3. What changes did the International Telecommunication Union (ITU) introduce in 2006 regarding loudness measurement?
4. What was the purpose of the European Broadcasting Union's (EBU) proposal four years after the ITU's algorithms were made available?
5. What types of audio files were produced for Monteverdi's 'Si dolce è 'l tormento' and who produced them?
Answers:
1. The original WAV file size is 108.4 MB, and the converted WAV file size is 33.0 MB.
2. Loudness is defined as a perceptual quantity, which is the magnitude of the physiological effect produced when a sound stimulates the ear.
3. In 2006, the ITU introduced algorithms for objectively measuring both the perceived and the peak level of digital audio signals, allowing broadcasters to assess audio volume perception instead of just measuring peak loudness.
4. The EBU proposed a metering system that could deal with various facets of loudness, improving the measurement and standardization of loudness in audio broadcasting.
5. The audio files produced for Monteverdi's 'Si dolce è 'l tormento' include WAV (converted), FLAC, and Ogg Vorbis files, produced by Weiss Engineering's Saracon, while AAC iTunes+ and mp3 files were generated by Sonnox's Fraunhofer Pro-Codec.
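The file sizes in answer 1 are consistent with a simple change of sample rate and bit depth. Assuming the original WAV was a 24-bit/96 kHz file converted to 16-bit/44.1 kHz (an assumption made here purely for illustration; Table 10.1 itself only lists the sizes), the expected reduction works out as follows.

original_rate, original_bits = 96_000, 24        # assumed format of the 108.4 MB original
converted_rate, converted_bits = 44_100, 16      # assumed CD-quality conversion target

ratio = (original_rate * original_bits) / (converted_rate * converted_bits)
print(round(ratio, 2))                # about 3.27
print(round(108.4 / ratio, 1))        # about 33.2 MB, close to the 33.0 MB in Table 10.1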
Chapter 11
Page 125
Answers:
1. A 2. C 3. B 4. B 5. B
Page 125 Questions:
1. What are the two main approaches engineers take when recording the sound of a grand piano?
2. How does the raised lid of a grand piano affect the sound frequencies?
3. What is the significance of microphone placement in relation to the piano's sound quality?
4. What frequency range do engineers need to consider when selecting microphones for recording a grand piano?
5. Why might engineers prefer unidirectional microphones when recording a grand piano?
Answers:
1. Engineers generally take two main approaches when recording the sound of a grand piano: one approach involves capturing the instrument from a distance to allow soundwaves to propagate and coalesce into a unified image, while the other approach involves closer microphone positioning to capture the instrument's sound from a nearer perspective.
2. The raised lid of a grand piano acts as a barrier that reflects mid and high frequencies better than lower ones, causing the tonal quality of the instrument to be considerably muted above and behind the lid.
3. Microphone placement is significant because reflections from walls, floors, and ceilings can compromise sound quality. For instance, placing the piano too close to a wall may create unwelcome early reflections or boost lower frequencies, while the floor generates the greatest number of early reflections, which can be mitigated by placing a rug under the instrument.
4. Engineers need to consider the entire frequency range of the fundamentals emanating from an 88-key piano, which spans from 27.5 to 4,186 Hz, plus harmonics above 10 kHz for the highest notes of the instrument.
5. Engineers might prefer unidirectional microphones because they can achieve a uniform tonal quality across the piano's frequency spectrum. While cardioid microphones can also produce a balanced sound, they need to be located farther from the instrument, and their proximity effect can make them less attractive for close placement.
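The range quoted in answer 4, 27.5 Hz to 4,186 Hz, follows from the equal-temperament formula for the n-th key of an 88-key piano with A4 (key 49) tuned to 440 Hz, as the short check below shows.

def key_frequency(n, a4=440.0):
    # Fundamental of the n-th key on an 88-key piano in equal temperament.
    return a4 * 2 ** ((n - 49) / 12)

print(round(key_frequency(1), 1))     # 27.5 Hz, the lowest A (A0)
print(round(key_frequency(88)))       # 4186 Hz, the highest C (C8)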
Pages 125-126 Questions:
1. What are the three factors that engineers consider when deciding on the location for stereo miking of a grand piano?
2. What is the typical distance range for an ORTF pair of microphones from the piano?
3. Describe the A-B spaced pair technique and its typical microphone spacing.
4. What is the purpose of the mid-side coincident pair technique in recording?
5. How does the placement of microphones affect the sound captured from the piano?
Answers:
1. The three factors that engineers consider are tonal balance between the instrument's registers, an acceptable ratio of direct to ambient sound, and a realistic stereo image.
2. An ORTF pair of microphones may be located anywhere from 70 centimeters (27.5 inches) to 3 or 4 meters (10 to 13 feet) in front of the piano.
3. The A-B spaced pair technique involves using omnidirectional microphones spaced typically between 20 to 61 centimeters (8-24 inches) apart, with the exact spacing determined by the desired width of the stereo image and the location of the pair along the piano's perimeter.
4. The mid-side coincident pair technique allows engineers to adjust the width of the stereo image to taste after tracking has been completed, making it a flexible option for capturing sound.
5. Moving the microphones from the tail around to the front of the piano increases the amount of mid- and high-frequency information captured, and recordists usually experiment to find the best location on the perimeter that suits the project.
Pages 125-126 True/false
Answers:
1.True 2.False 3.False 4.True 5.False
Page 127-128 Questions:
1. What is the purpose of placing microphones inside the piano when recording in unfavorable room acoustics?
2. What is the recommended height for positioning microphones above the strings to capture a broad spectrum of sound?
3. How do engineers typically position microphones in a spaced pair configuration for recording piano?
4. What is the effect of using coincident pairs of microphones positioned over the hammers of the piano?
5. What is the advantage of using a quasi-ORTF array with cardioid microphones in piano recording?
Answers:
1. Placing microphones inside the piano helps to minimize or eliminate the negative effects of the room's reverberation characteristics, allowing for a clearer recording of the instrument.
2. Microphones are recommended to be positioned at least 20 to 25 centimeters (8 to 10 inches) above the strings, with some engineers placing them as high as 61 centimeters (24 inches) for better sound capture.
3. In a spaced pair configuration, engineers usually place one unidirectional microphone over the treble strings and the other above the midrange and bass strings to achieve an even representation of lower frequencies.
4. Using coincident pairs of microphones positioned directly over the hammers produces a brighter, more percussive sound, while placing them farther along the strings can provide greater warmth.
5. A quasi-ORTF array with cardioid microphones can yield an acceptable sound by providing even coverage of upper, mid, and low frequencies, with many recordists finding success by placing the mics in the middle of the piano's soundboard.
Pages 127-128
Answers:
1. B 2. B 3. B 4. B 5. C
Page 126 True/False
Answers:
1.True 2.False 3.True 4.False 5.True
Page 127
Answers: 1.B 2.B 3.B 4.C 5.C
Page 128 True/False
Answers:
1. True 2.False 3.True 4.True 5.False
Page 128
Answers:
1.B 2.B 3.C 4.B 5.C
Chapter 12
Pages 129-130
Answers: 1. A 2.B 3.B 4.B 5.B
Pages 129-130 Questions
1. What is the significance of the critical distance in microphone placement for recording singers and pianists?
2. How do engineers typically position the singer and piano to minimize microphone bleed?
3. What is the role of the music stand in relation to the vocalist's microphones?
4. What is the rationale behind using a spot microphone in conjunction with a main stereo pair?
5. What adjustments do recordists make to find the ideal microphone placement for the A-B pair of omnidirectional microphones?
Answers:
1. The critical distance is significant because it helps find the best proportion of direct to reverberant sound for the recording project. By evaluating locations around this distance, recordists can achieve a balance that enhances the presence of the singer's voice in the recording.
2. Engineers often position the singer on the opposite side of the raised lid of the piano, facing forward into the hall. This configuration allows the lid to act as a barrier, reducing the amount of sound bleed between the microphone locations.
3. The music stand is positioned in such a way that it prevents reflections from entering the microphones. The microphones are set behind the stand, angled appropriately, and placed at a distance that allows for optimal sound capture without interference from the stand.
4. The rationale is to achieve a more natural balance between the singer and the piano. By placing the spot microphone closer to the singer, the recording can create a playback image where the piano is perceived to be slightly farther away than the voice, enhancing the overall sound quality.
5. Recordists adjust the A-B pair of omnidirectional microphones by moving them back and forth and up and down until they achieve the desired symmetry between the vocalist and the accompanist. This careful positioning helps to create a balanced sound in the recording.
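The critical distance mentioned in answer 1 can be estimated with a common rule of thumb, Dc = 0.057 * sqrt(Q * V / RT60), where Q is the directivity of the source, V the room volume in cubic metres, and RT60 the reverberation time in seconds; the room figures below are invented purely for illustration.

from math import sqrt

def critical_distance(q, volume_m3, rt60_s):
    # Distance at which direct and reverberant sound energy are roughly equal.
    return 0.057 * sqrt(q * volume_m3 / rt60_s)

# Hypothetical recital room: omnidirectional source (Q = 1), 1,500 m^3, RT60 of 1.8 s.
print(round(critical_distance(1, 1500, 1.8), 2))    # about 1.65 metres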
Pages 130-131
Answers: 1. A 2.A 3.A 4.A 5.A
Pages 130-131 Questions:
1. What challenges do engineers face when recording a violin and piano together in a typical concert arrangement?
2. How does the positioning of the violinist affect the recording quality?
3. What is the significance of the critical distance in recording?
4. What alternative arrangement do some recordists use when recording a violin and piano?
5. What role do cardioid spot microphones play in the recording process?
Answers:
1. Engineers face the challenge of capturing a balanced perspective when the violinist stands adjacent to the keyboard. A single pair of stereo microphones centered between the violinist and the tail of the piano often results in an unbalanced stereo playback image, with the violin dominating the left side and the piano filling the right.
2. The positioning of the violinist is crucial for recording quality. If the violinist stands in the usual concert position, the microphones may not capture direct sound from the violin, as the instrument's soundboard faces the hall. Moving the violinist to a position where the soundboard points into the hall allows for better direct sound capture from both the violin and piano.
3. The critical distance is significant because it helps engineers determine the most suitable distance and height for the microphones to achieve a balanced stereo image. Experimenting in the vicinity of the critical distance allows for adjustments that can enhance the recording quality by blending the sound from both instruments effectively.
4. Some recordists abandon the typical concert arrangement and opt to face the violin towards the piano. In this setup, they place the main stereo pair of microphones between the instruments, favoring the smaller sound source, which allows for a more natural stereo image and better balance between the violin and piano.
5. Cardioid spot microphones may be added to both instruments if the recording lacks the desired presence or intimacy. These microphones help to capture more direct sound from each instrument, enhancing the overall quality of the recording and providing a better blend of intimacy and room sound.
Page 131 True/False
Answers:
1.True 2.True 3.True 4.False 5.False
Pages 131-132
Answers:
1. D 2.A 3.C 4.C 5.B
Page 132 Questions:
1. What is the primary way brass instruments, such as trumpets and trombones, radiate sound?
2. Why do many recordists prefer to place microphones at least a meter in front of trumpets and trombones?
3. What microphone types do recordists often choose for capturing the sound of trumpets and trombones, and why?
4. How does the bell position of the French horn affect microphone placement?
5. What is the effect of on-axis microphone placement compared to off-axis placement for brass instruments?
Answers:
1. Brass instruments radiate sound from their bells, with higher frequencies propagating directly forward and lower frequencies spreading out over a wider angle.
2. Many recordists prefer to place microphones at least a meter in front of trumpets and trombones to allow the range of emitted frequencies to coalesce before the soundwaves strike the diaphragm, achieving a balanced frequency spectrum.
3. Recordists often choose large diaphragm condensers or ribbons because these microphones have a more subdued sound and help tame some of the piercing aspects of the tonal qualities emanating from trumpets and trombones.
4. The bell of the French horn normally faces away from the audience, so a microphone position in front of the player presents the instrument from the listener's perspective, while a second microphone behind the player can provide more detail and clarity.
5. On-axis microphone placement produces a much brighter, sometimes piercing tonal quality, but sacrifices the warmth created by the more balanced radiation captured off-axis.
Page 132 True/False
Answers:
1. False 2.True 3.False 4.False 5.False
Chapter 13
Page 135
Answers: 1.A 2.B 3.C 4.B 5.B
Page 135 Trio True/False
Answers:1. True 2.False 3.True 4.False 5.True
Page 136 True/False
Answers:
1.False 2.True 3.True 4.False 5.False
Chapter 14
Pages 137-138
Questions:
1. What microphone technique is used by Simon Eadon in his recordings of Marc-André Hamelin?
2. Describe the microphone setup used in the video made at the University of Surrey's Institute of Sound Recording for capturing Ravel's Sonatine, No. 2.
3. What is the size of Henry Wood Hall, and how does Eadon capture its acoustic?
4. What additional equipment did the engineer use to augment the room's ambience during the recording session?
5. What was the purpose of raising the lid of the piano higher than usual during the recording session?
Answers:
1. Simon Eadon uses a spaced pair of Schoeps mics (MK 2S unidirectional capsules) for his recordings of Marc-André Hamelin. The microphones are positioned 23 centimeters apart, pointed at the tail of the piano, and angled slightly down from a height of 168 centimeters.
2. The microphone setup for capturing Ravel's Sonatine, No. 2 included two AKG C414s in Blumlein configuration over the hammers, two AKG C414s in a midside array facing the lid of the piano, a ribbon mic below the tail end of the soundboard for warmth, and a pair of Schoeps mics (MK 2H unidirectional capsules) spaced about 30 centimeters apart for room ambience.
3. Henry Wood Hall is approximately 10 meters high, 20 meters wide, and 33 meters long. Eadon captures its spacious acoustic in a way that retains clarity while also providing a great deal of warmth.
4. The engineer augmented the room's ambience with a Lexicon 480L external reverb unit during the recording session.
5. The lid of the piano was raised higher than usual by a 3-foot (0.9 meter) rod to enhance the sound projection and capture a better acoustic quality during the recording session.
Page 138
Answers:
1.A 2.C 3.C 4.A 5.B
Page 139-140
Answers:
1.A 2.A 3.B 4.C 5.B
Chapter 15
Page 143
Answers:
1.D 2.B 3.A 4.A 5.C
Page 143 Questions
1. What challenges do musicians face when trying to perform music from the sixteenth to nineteenth centuries in modern settings?
2. How does modern studio technology help in achieving a historically informed performance?
3. What specific recording project is discussed in the chapter, and who were the key contributors?
4. What was the focus of the pre-production stage of the recording project?
5. What is the significance of the acoustic design mentioned in the chapter?
Answers:
1. Musicians often find themselves without access to suitable rooms for tracking, and large churches, which have become favored venues for recording early music, are often too reverberant for much of the repertoire, particularly for solo songs accompanied by quiet instruments like the lute, guitar, or harpsichord.
2. Modern studio technology, particularly convolution reverbs based on impulse responses from smaller historic rooms, can simulate the aural sense of the modest chambers where the music was originally performed, allowing for a more authentic listening experience.
3. The chapter discusses the recording of Tomaso Albinoni's 'Amor, sorte, destine' from the album 'Secret Fires of Love' (2017), featuring tenor Daniel Thomson and harpsichordist Thomas Leininger, with production by the author and recording by Robert Nation.
4. During the pre-production stage, Daniel Thomson, Thomas Leininger, and the author finalized the interpretive strategies that would be employed for the recording, ensuring a historically informed approach to the performance.
5. The acoustic design aimed to give listeners the impression that they are sitting in the same small room as the performers, enhancing the authenticity and intimacy of the listening experience for the early eighteenth-century cantata.
Page 144
Answers:
1.A 2.B 3.C 4.B 5.A
Page 144 Questions:
1. What are the older principles of interpretation in historical performance compared to modern practices?
2. How does Daniel's performance style reflect historical practices?
3. What techniques does Daniel employ in his performance of 'Amor, sorte, destine'?
4. What role do pauses play in historical singing practices according to the document?
5. What did Francis Clement explain about the rationale behind adding unnotated pauses in singing?
Answers:
1. Older principles of interpretation differ considerably from those currently used by classical musicians. They require reconstructing practices from surviving sources of information, allowing for a fresh approach to Baroque vocal works.
2. Daniel adopts the persona of a storyteller and uses techniques of rhetorical delivery to recreate the natural style of performance that listeners from the era would have heard. This requires him to alter written scores substantially.
3. In 'Amor, sorte, destine,' Daniel treats the recitatives and arias differently, emphasizes important words, alters tempo frequently, restores messa di voce, contrasts tonal qualities of chest and head voice, and applies portamento.
4. Singers of the past inserted grammatical and rhetorical pauses to compartmentalize thoughts and emotions, allowing listeners time to reflect on what they heard. This frequent pausing helped convey the meaning of the text.
5. Francis Clement explained that adding unnotated pauses relieves the breath, allows the meaning to be conceived, delights the ear, and satisfies all the senses.
Page 145
Answers:
1.B 2.C 3.A 4.D 5.A
Page 145 Questions:
1. What is the significance of vocal timbres in conveying emotions according to the document?
2. How did David Ffrangcon-Davies view the tonal palette in singing during his time?
3. What is the purpose of inserting grammatical or rhetorical pauses in singing as described in the document?
4. What is the difference between accent and emphasis in singing as explained in the document?
5. How does the application of accent and emphasis contribute to the delivery of complex ideas in singing?
Answers:
1. Vocal timbres are significant in conveying emotions as they allow singers to differentiate their registers and link timbre with emotion. The document states that the greater the passion, the less musical the voice that expresses it, indicating that the emotional quality of the voice is crucial for effective communication of feelings to listeners.
2. David Ffrangcon-Davies viewed the tonal palette in singing as versatile, which prevented monotony in performances. He criticized the new monochromatic approach to timbre, suggesting that it led to a situation where if an audience heard a singer in one role, they had essentially heard that singer in every role.
3. The purpose of inserting grammatical or rhetorical pauses in singing is to organize and pace the content of the poem, making it easier for listeners to grasp the story. These pauses allow singers to change the speed of delivery to match the emotional character of the phrases, enhancing the overall performance.
4. Accent refers to the stress placed on a single syllable to distinguish it from others in a word, while emphasis refers to the force of voice laid on an entire word or group of words to highlight associated ideas. Proper accentuation adheres to normal pronunciation, whereas emphatic delivery varies based on the meaning performers wish to convey.
5. The application of accent and emphasis creates light and shade in delivery, helping singers project the meaning of long and complex ideas clearly. It prevents monotonous delivery by allowing performers to arrange words by importance and achieve a distribution of emphases, which enhances listener comprehension.
Page 146
Answers:
1.A 2.B 3.A 4.B 5.D
Page 146 Questions:
1. What process did Daniel follow to create a dramatic spoken reading of the poem after analyzing the song's text?
2. How did singers in earlier times differ in their approach to performing music compared to modern singers?
3. What did composers of the past typically notate in their music, and what was the implication of this practice?
4. What did Nicola Vicentino and Andreas Ornithoparchus express about the limitations of musical notation?
5. What was the perspective of Domenico Corri regarding the performance of music as noted in 1781?
Answers:
1. Daniel decided on the pauses to be employed, emphasized certain ideas, and varied the speed of delivery to suit the changing emotions of the text. He also noted where 'messa di voce' and 'portamento' occurred to ensure a natural sound in singing.
2. Singers in earlier times viewed their role as one of re-creation rather than simple interpretation. They personalized songs through modifications and understood that composers wrote songs skeletally, meaning they could not read the notation literally.
3. Composers of the past did not notate subtleties such as rhythm, phrasing, dynamics, and ornamentation. This implies that they did not intend to capture the performance elements that moved listeners, leaving performers with the responsibility to interpret and express the music.
4. Nicola Vicentino noted that some methods of singing could not be written down, while Andreas Ornithoparchus praised singers for their ability to make notes longer or shorter than written, highlighting the flexibility and expressiveness required in performance.
5. Domenico Corri candidly stated that singing an air or recitative exactly as commonly noted would result in a very inexpressive and uncouth performance, emphasizing the need for performers to interpret the music creatively.
Page 146-147 True/False
Answers:
1. True 2.False 3.True 4.True 5.False
Pages 147-149
Answers:
1.A 2.C 3.B 4.C 5.B
Pages 147-149 Questions:
1. What were the main production techniques used in 'Amor, sorte, destine' to enhance period interpretation?
2. How did the roles of music director and producer benefit the project?
3. What considerations were made regarding microphone selection for the harpsichord?
4. What was the process followed after recording several takes of the cantata?
5. What was the significance of using Pro Tools HD for tracking?
Answers:
1. The main production techniques included blending the worlds of recording and live performance, using isolated sound sources recorded by closely placed microphones, and employing punch-ins to perfect excellent takes instead of completely rerecording sections with minor imperfections.
2. Having one person assume the roles of music director and producer allowed for a single conception of the cantata to emerge, guiding the various sonic possibilities available to the artists and engineers, and ensuring that decisions made throughout the process were cohesive and historically relevant.
3. The selection of microphones for the harpsichord took into account the instrument's bright tonal quality. It was important to choose microphones that would produce a natural stereo sound without exaggerating the brightness, leading to the choice of unidirectional microphones with a linear frequency response.
4. After recording several takes, the entire team listened to the recordings to determine which could become a master take. They decided to work on the two recitatives separately from the two arias, and after selecting the best recordings, Daniel and Thomas made minor corrections before the material was prepared for editing and mixing.
5. Pro Tools HD was used for tracking at 24 bits/96 kHz to provide an excellent signal-to-noise ratio and increased dynamic range, which were crucial for capturing the nuances of the performance and ensuring high-quality recordings.
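The benefits of 24-bit/96 kHz tracking mentioned in answer 5 can be quantified with two standard rules of thumb: an ideal N-bit PCM converter offers roughly 6.02 * N + 1.76 dB of dynamic range, and the highest representable frequency is half the sample rate.

def dynamic_range_db(bits):
    # Theoretical dynamic range of an ideal N-bit PCM quantizer.
    return 6.02 * bits + 1.76

print(round(dynamic_range_db(16), 1))    # about 98.1 dB
print(round(dynamic_range_db(24), 1))    # about 146.2 dB
print(96_000 / 2)                        # Nyquist limit at 96 kHz: 48000.0 Hz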
Pages 149-151
Answers:
1.C 2.A 3.D 4.C 5.A
Pages 149-151 True/False
Answers:
1.False 2.True 3.True 4.False 5.True