The First Fifth of
The Development of Western European Stringed Instruments by Ephraim Segerman
This book offers no new discoveries of historical evidence, but many new interpretations of the available evidence. It differs from previous histories in the questions I ask and how I try to answer them. The main questions I ask of the evidence are how the instruments (and their names) developed, and how they were tuned, played, made and sounded. Much historical information is not relevant to these quests, such as outstanding composers, players, makers, surviving instruments and repertoire, and so these issues are not discussed. Previous histories have answered most of the interesting questions for which the evidence clearly allows only single answers. The questions I ask go further, including those which may have more than one possible valid answer.
There is no objective way of choosing what questions to ask, but in evaluating answers, some ways are more objective than others. The usual way in the music world is to judge whether it makes sense in terms of what one believes one knows, and how much evidence it can explain. My approach, which is more objective, is to judge how well the answer (i.e. theory) can explain every bit of the relevant evidence. The difference is discussed in Chapter 1a.
Many of my suggested answers will be considered by some to be wild speculations, since the evidence does not unambiguously support them. I consider them to be valid theories (because they can reasonably explain how all of the evidence came to be what it is), and I accept that other valid theories could possibly answer the questions just as well. A collection of valid theories answering each question would be a much greater contribution to knowledge than concluding that the question is unanswerable (i.e. it remains a mystery) with the evidence available. In most cases, I strongly doubt whether there are many other theories with as much explanatory power as the ones I offer.
I have been trained as a scientist rather than as an historian or musician, so my indoctrination does not include the traditions of music scholarship, making my approach to how scholarship is performed different from that of most music historians in some crucial ways. One way is that I am more determined than most to avoid letting my enjoyment of performances of the music that I hear (or can imagine) influence my judgements concerning the interpretation of evidence. It could be considered that this makes me rather old-fashioned. Westrup wrote that
“A friend of mine was once shown, by Johannes Wolf, a piece of old music. He remarked, innocently enough, that it ought to sound well, only to receive the austere reply: ‘I’m not interested in how it sounds’”.1
Wolf was a music historian, and apparently was aware that any attempt to judge how well it sounds involves modern criteria which would obscure the only relevant historical question about sound, which is how it was expected to sound when written.
Most music historians since Wolf’s time have been too interested in how it sounds to modern ears, considering that the major purpose of their work is to contribute to the enjoyment of music by the listening public (with the cooperation of early music performers). Making music enjoyable is the job of musicians, not historians. It is right and proper for musicians to exploit what the historians do when it suits their own purposes. But the music historian’s job is to contribute to building the most probable picture of the sounds of what was performed and enjoyed then (as well as the cultural environment around such performances), based on as full and objective an interpretation of all of the evidence as can be mustered. It is not the historian’s job to support the fantasy that many musicians and instrument makers have, that they are closely imitating the original musicians and makers, nor to support the fantasy that many listeners have, that they are hearing just what their ancestors heard. The historians have been giving this support by indulging in their own fantasy of expecting that the more historically accurate a performance of old music is, the more enjoyable it should be to modern ears. To protect this fantasy, they have avoided the pursuit of any scholarship that might reduce the attractiveness of the music, and have resisted acceptance of any research leading to such results.
Since development is my major theme, I pay rather more attention to chronology than most - what changes appear to have happened when. Thus, while most histories concentrate on characterising main-stream instruments, I am just as concerned with transitional instruments. Ordering the evidence in chronological sequence has allowed my suggested answers to development questions to include some detailed stages.
From my scientific training, I am more numerate than most in the field, and so am more likely to use quantitative methods whenever the questions are quantitative. These methods have mainly been applied to questions about the sizes of instruments, for which as a measure of size, I have focussed on the vibrating open-string length.2
There is an attempt here to be somewhat comprehensive, so an amount of rehashing of familiar material cannot be avoided. But I don’t adhere to the convention that the space given to each topic should be related to a judgement of its importance, either then or now. The space I give to each instrument is mostly related to how much I have to say about its main characteristics and its development.
In the current culture of music historians, it is commonly felt that the great scholars of the past have established a fairly complete broad picture of music history. This is no doubt true. But it is very often also considered that what is not known are 'mysteries', not knowable with the evidence available, and that all that modern scholars can expect to do is to collect new evidence to fill in the details. Those with this view may not welcome my reinterpreting the evidence on some historical questions, considering that these issues were either settled ages ago, or if not, they should remain as mysteries. I am sure that the revered early scholars would have been much more generous in considering and debating alternatives. And they would have been more willing to correct their mistakes when these were pointed out to them.3
I never intended to write a book since my writing style is not as readable as most. My work involves trying to answer historical questions that haven't been adequately answered before. Since I could not have confidence that I had adequately considered all alternatives to my answers, initial publication in FoMRHI Quarterly, which always welcomed incomplete and controversial studies, has been appropriate. With the publishing of Quarterlies halted by the current leadership of FoMRHI, it no longer provides an outlet for my work. Indeed, some of my interpretations published as Communications (Comms) in FoMRHI Q have had to be revised, but a surprising number of others have stood up to further consideration, and it would be appropriate to combine these into longer articles in a journal of record that has a wider distribution in academic circles. This view is apparently not shared by such journals. A recent article offered to one of these journals was rejected on the basis that the ideas had previously been published in FoMRHI Q. It was suggested that publication in a book would be more appropriate. With both of these outlets effectively closed, I really have had no choice but to write a book.
Many people consider that a book has more authority than other scholarly publications. That is only true to the extent that the time it takes to produce a book is great enough for second thoughts to modify ideas and how they are expressed, and to correct errors. Every book I have seen that was supposed to be the final word on its topic has turned out not to be so. This one will most certainly not be the final word on any topic. This could be especially true here since many of the ideas presented have not had the luxury of previously appearing in FoMRHI Comms, and so they have not had the benefit of such further reconsideration.
If there is interest, I hope soon to produce an improved illustrated edition of this book. Readers are requested to help this improvement in any way they can, especially by informing me of whatever errors I've made (such as where there is evidence that contradicts any of my theories) and of omissions (such as alternative valid answers to the questions I have addressed).
I have greatly enjoyed putting this book together, and I hope that the reader will find the ideas presented stimulating, even if there is some reluctance to incorporate them into one’s views of instrument history. I expect few immediate converts to my methodology, but expect that there will be many more when fashions of thinking about scholarly questions in this field swing towards more logical and quantitative analysis, objectivity, and more respect for the evidence and for valid theories attempting to explain it.
May 2004, revised May 2006
The Development of Western European Stringed Instruments
Chapter 1a: Methodology - Approaches in scholarship
The scholarly method I follow . . . . . . . 9
The method usually followed in music scholarship . . . . . 10
Chapter 1b: Methodology - Determining the original sizes and pitches of instruments
Measuring pictures . . . . . . . . . 14
Size estimation from finger stretch . . . . . . . 16
Evidence from surviving instruments . . . . . . 16
Praetorius's pitch - organ evidence . . . . . . . 18
Praetorius's pitch - wind-instrument evidence . . . . . 22
Pitch and string length limits from string properties and Praetorius’s evidence . 24
The highest-pitch longest-length limit . . . . . 24
The lowest-pitch shortest-length limit - pitch instability and pitch distortion 26
The lowest-pitch shortest-length limit - inharmonicity . . . 27
TABLE: Gut string limits of pitch and string-length from Praetorius . . 30
Chapter 2: Performance practices: early compared to modern
Embellishment . . . . . . . . . 31
Fastest notes . . . . . . . . . . 33
Tempo . . . . . . . . . . . 34
Time alteration . . . . . . . . . 35
Note production on viols and voices . . . . . . . 36
Phrasing and style . . . . . . . . . 37
Standards of precision . . . . . . . . 38
Concluding comments . . . . . . . . 40
Chapter 3: Medieval stringed instruments before the 15th century
Ancient stringed instruments . . . . . . . . 41
Medieval instrument names and development . . . . . . 42
Cruit and crowd . . . . . . . . . 45
Harp . . . . . . . . . . . 46
Psaltery . . . . . . . . . . 47
Rotta . . . . . . . . . . . 47
Fiddle and gigue . . . . . . . . . 47
Jerome of Moravia . . . . . . . . 48
Other considerations . . . . . . . . 51
Rebec . . . . . . . . . . . 52
Symphony or organistrum . . . . . . . . 53
Citole . . . . . . . . . . . 56
Lute and gittern . . . . . . . . . 57
Monochord and keyboard instruments inspired by it . . . . . 59
Instrument construction . . . . . . . . 59
Some general points about instrumental music . . . . . 61
Chapter 4: Developments in the 15th century
General points, including developments on the harp, lute and gittern . . 62
New types of fiddles including those influenced by the lute . . . . 63
The demise of the lute in Spain . . . . . . . 66
Trumpet marine (and harp brays) . . . . . . . 66
Cetra . . . . . . . . . . . 67
Psaltery and dulcimer . . . . . . . . . 69
Chapter 5: The development of liras and sets of viols
Lira da braccio and lira da gamba . . . . . . . 70
From bowed vihuelas to viols and lironi in Italy . . . . . 72
Tunings of Italian viols in sets . . . . . . . 76
The development of viols in Germany . . . . . . 80
Early viols in Spain . . . . . . . . . 85
Sets of viols in France . . . . . . . . 86
Sets of viols in England . . . . . . . . 88
The construction and design of viols . . . . . . . 90
Chapter 6: Independent viols
Viols for vocal accompaniment . . . . . . . 93
Viola bastarda and lyra viol . . . . . . . . 93
Barytone viol . . . . . . . . . . 94
Viola d'amore . . . . . . . . . . 95
Division viol and violoncino . . . . . . . . 96
Miniature soloistic viols . . . . . . . . 97
Baroque and later double-bass viols . . . . . . . 98
Chapter 7: Renaissance and baroque fiddles
Ensemble fiddles in 16th century Italy . . . . . . 100
Soloistic fiddles in 16th century Italy . . . . . . . 103
Italian fiddles in the 17th century . . . . . . . 105
Large bass fiddles . . . . . . . . . 108
Fiddles in France and England . . . . . . . 108
Fiddles in Germany . . . . . . . . . 112
Chapter 8: Renaissance and baroque plucked fingerboard instruments usually strung with gut
Lutes with a single neck . . . . . . . . 116
Lutes with two necks (or with an extended neck) . . . . . 121
Vihuela and viola . . . . . . . . . 124
Four-course guitar (gittern) . . . . . . . . 126
Five-course baroque or Spanish guitar . . . . . . 127
Angel lute or angelique . . . . . . . . 129
Colascione, colachon and gallichon . . . . . . 131
Mandora and mandolin . . . . . . . . 132
Wire-strung mandoras . . . . . . . . 135
Chapter 9: Renaissance and baroque plucked wire-strung fingerboard instruments
Citterns in the first half of the 16th century . . . . . . 137
Citterns from the second half of the 16th century and from the 17th . . . 139
Sizes . . . . . . . . . . 139
Fretting . . . . . . . . . 140
Construction and design . . . . . . . 141
French citterns . . . . . . . . 142
German citterns . . . . . . . . 143
'French'-tuned English cittern . . . . . . 144
The Meuler steel revolution . . . . . . . 144
‘Italian'-tuned English cittern . . . . . . . 145
Guittern and late cittern . . . . . . . 147
5-course guittern and cithrinchen . . . . . . . 148
English guitar or cistre . . . . . . . . 148
Archcittern . . . . . . . . . . 151
Bandora and orpharion . . . . . . . . 152
Polyphont . . . . . . . . . . 156
Appendix 1: Development influence charts
Descendants of the earliest stringed instruments . . . . . 158
Development of the more modern bowed instruments . . . . 159
Development of the more modern plucked instruments . . . . 160
Appendix 2: Some topics involving string and wood technology
Polnische geigen, fingering past the fingerboard and violin fingerboard length . 161
Twisted and roped gut strings, and catlins . . . . . . 162
Modern misconceptions . . . . . . . 165
Bowed strings, bridges, soundposts and bass bars . . . . . 166
Sound absorption by creep in strings and instruments . . . . 167
Moisture content and swelling in gut strings and wood . . . . 168
The maturing and ageing of wood . . . . . . . 169
Peg fitting . . . . . . . . . . 171
Index . . . . . . . . . . . . 173
The Author . . . . . . . . . . . 178
The Development of Western European Stringed Instruments
Chapter 1a: Methodology - Approaches in scholarship
The scholarly method I follow
Everyone agrees that the purpose of scholarship in any field is to create knowledge, which is composed of evidence and theories. Evidence is the raw material worked with, and theories are generalisations that provide explanations of the evidence and apply beyond it. The basic criterion for acceptance of a theory is that it is consistent with the evidence. Beyond such basics, methodology can vary. My approach follows that most commonly followed in the sciences, where a theory is falsified (proved untrue) when there is any piece of evidence that it cannot explain in a way that has a reasonably acceptable probability of being true. A falsified theory then has to be either abandoned or modified to remove the falsification. Scientists design experiments to obtain the critical evidence that will falsify one or more of the competing theories. No theory can be proven true because it is always possible that a new piece of evidence will appear that falsifies it. Knowledge is not to be believed as true, but is to be trusted as the closest to truth that scholarship can offer with the evidence available and the valid (i.e. non-falsified) theories that have been offered.
The job of a scholar, while collecting all of the relevant evidence, is to imagine the various possible theories that might explain the evidence, and to eliminate those that are falsified by evidence that cannot be adequately explained by them. The remaining theories are then evaluated according to the probability of each one's least probable explanation of any piece of evidence. This is the criterion for how well a theory explains all of the evidence. If that probability is clearly much higher for one theory than for the others, it is chosen and added to current knowledge. If anyone suspects that this particular theory should not be the chosen one, he or she tries either to collect new evidence that could falsify it, or to create a new theory (or to modify a preexisting one) that explains all of the evidence at least as well as the chosen one does.
If two theories equally well explain all of the evidence, the simpler one is preferred. This principle goes by the name 'Occam's razor'. 'Simpler' is defined by having fewer assumptions unsupported by evidence. A common misapplication of this principle is to pre-judge how simple the theory should be and reject one that works just because it is more complicated than expected. Another common mistake is to interpret Occam's razor so that a simpler theory is to be preferred regardless of how comprehensively the competing theories can explain all of the relevant evidence.
The probability of a theory's explanation of a piece of evidence is a matter of judgement. Since scholarship is supposed to be as objective as possible, the effect of bias in judgement should be kept to a minimum. Bias is difficult to avoid in judgement, but to be able to come to conclusions in scholarship, judgement at some point is unavoidable. The process of judging the historical probability of a theory's explanation of a piece of evidence resonates much less strongly with the unavoidable biases of previous expectations and almost-unconscious vested interests than the usual alternative - judging the probability of a theory being true. The former allows one to be more rational and fair, thus maintaining a higher level of objectivity.
Maximum respect for the evidence is basic in this approach to scholarship, so the evidence should have maximum control over the choice of theory. When one piece of evidence is apparently in contradiction with another, the theory must be able to explain both reasonably well. A theory that explains both without assuming error is preferred to one that assumes one is in error. But mistakes in evidence do occur, and conflict with other evidence is the only way to detect them. The mistake could be the result of the source's incompetence, bias or misunderstanding; the evidence might not be what it seems to be; or there could be an error in methodology or recording, or a deliberate attempt to mislead. A theory's explanation for it would present a scenario of how it could have become what it is, citing support from other evidence for similar problems in that source or similar ones. The probability of the explanation reflects how readily such a problem could occur, which makes falsification of theories by mistaken evidence remarkably rare. It is also very rare for sources to deliberately mislead.
Trust is usually given when we perceive that the probable consequences of not trusting are more undesirable than living with the perceived probability that the trust would be betrayed. For example, we trust doctors properly to diagnose and treat our illnesses, though we know that there is some probability that errors will be made. They have our trust because it is considerably more probable that our health will be preserved by trusting them than by following any alternative course. It is in this spirit that we trust the evidence (unless there is other evidence that suggests otherwise), and we trust that the chosen theory is as close to truth as is currently possible (unless we have some good ideas that could lead to replacing the chosen theory with another one).
Accepting that knowledge is no more than the best that scholars can do with the evidence available and the theories they have been able to dream up, with no objective way to determine how close to truth the theories might be, there is no inhibition against forming theories even when the evidence is very sparse indeed. This allows a large majority of the realistic questions historians want to ask to have answers in theories. With little evidence, the decisive evidence that falsifies some theories might be lacking, but having a set of possible theories is a better position to be in than keeping these questions open as mysteries to which there are no possible answers.
The method usually followed in music scholarship
In music scholarship, theories are usually treated quite differently. Training does not include the falsification of theories by contradictory evidence, so there is no way to clearly disprove any theory with decisive evidence that invalidates it. Since there is no recognised objective criterion for settling disputes, controversy is rather futile and so is frowned upon, with gentlemanly behaviour the rule. The approach is to try to 'prove' that a theory is true by arguing rhetorically that the theory must be true because of the impressive amount of evidence that is consistent with it. Some evidence can be mistrusted, requiring 'confirmation' by other evidence to be taken seriously, so evidence that contradicts one's theory can easily be rejected as probably wrong, without taking responsibility for showing how it could have become wrong. A theory is incorporated into knowledge when the leading authorities in the field are convinced of its truth and include it in their books. When they are not convinced of the truth of a theory, they usually just ignore it, but when pressed for a response, the reaction is 'it is not proven'.
The main model for this process appears to be the law. The law aims to make quick clear decisions punishing (and deterring potential) wrongdoers or settling disputes within currently acceptable criteria of fairness. Since there is lots to gain or lose, much testimonial evidence is expected to be false or misleading. The weight of evidence and its trustworthiness are paramount. Evidence is readily rejected if one can raise any doubt about its reliability, and the decision is made on the basis of judgement by a jury or judges. That judgement is considered to be the proof. Since the outcome of the proceeding is very often dependent on judgements concerning the truth of evidence, it is strongly influenced by the persuasiveness of the performances of witnesses and advocates.
Because the knowledge that music scholars produce is strongly based on judgement and consensus, it is subject to change when a new generation wants to make its mark, and thus knowledge is a creature of fashion. Recent fashions have been deconstruction (which attempts to 'debunk' accepted judgements), and the politically-correct promotion of the contributions of women and members of minority groups. Whether truth could or should be a matter of fashion is a matter of debate. Postmodernism is rather popular in our modern culture, and it postulates that there is no truth other than what is believed to be true. No distinction is made between subjective truth, which is what is believed, and an objective truth, which is a reality out there that is independent of what anyone thinks about it. Having unreserved conviction that one's insights are true is an advantage for success in many fields, including being a musician. Many music scholars have it too, and their scholarship is what they do to convince others of this. A statement attributed to Howard Mayer Brown4 (though I know of no evidence that he followed it) illustrates this: 'Musicology is what you indulge in when you know something is true, and have to go out and prove it'. To members of this school of music historians, the only function of evidence is to bolster their claims.
Other schools accept that there is a truth independent of what is believed to be true. One is a school that is at the opposite pole, being skeptical about most theories. The members of this school will only give acceptance to a theory if it is an unambiguous consequence of the evidence. Theories that cannot qualify (including those that are needed to answer most of the interesting unanswered historical questions) are to be avoided. Scholars in this school (and H. M. Brown was an outstanding practitioner) are renowned for collecting evidence and presenting it in useful ways.
The majority of music scholars I've met belong to a third school. They believe that scholarship produces truth that they can believe, and they seek answers to the important historical questions. The only way to achieve both of these objectives is to rely on consensus amongst their peers as the criterion for worthiness of belief. They may be somewhat independent thinkers in their own narrow fields of study, but otherwise, they follow the crowd. To them, the evidence and theories that are agreed to be true by the consensus are considered 'facts', and theories that don't have such agreement are considered 'speculations'. Speculations are opinions that everyone is entitled to have and promote, no matter how well or poorly they can explain the evidence. This seems quite appropriately liberal-democratic, but liberal democracy only works where there is controversy and free debate, and in this field these are discouraged as unseemly. Speculations offered by respected scholars that are not challenged for some time slide into being considered facts, and are thus added to knowledge that can be believed in. Once a theory is so accepted, any new competing theory faces an enormous struggle just to be considered seriously.
Professional success in any field depends on communication skills, charisma and a reputation for competence. In a field like this one, that finds controversy embarrassing, anyone not superbly endowed in such ways who disputes issues that are considered settled, or engages in disputes of any sort, will be mistrusted and considered a loose cannon that might lower public respect for the field.
Some questions in music history present difficulties for music scholars when the surviving evidence conflicts with their aesthetic understanding of the music (i.e. when the objective truth indicated by the evidence conflicts with subjective truth, which is strongly related to aesthetic expectation), so these areas are consigned to the category of 'mysteries'. One example of this is the level of improvisatory deviation from the written music in early performances. Another, which I will discuss now, is the history of tempo standards:
It is a tribute to the objectivity that musicologists can muster that there is general acceptance that, till well into the baroque, contrary to modern practice, tempo markings referred to tempo standards. When some of the evidence concerning what those standards were was discussed by eminent musicologists in the 20th century, some of it was disbelieved and some misinterpreted. The problem was that when the musicologists performed the music at the tempi indicated by the evidence, it moved much more slowly than they expected, and they could no longer enjoy it or understand it in the way they were used to. They couldn't imagine how it could possibly ever have been appreciated that slowly. Musicologists seem to have convinced themselves that their understanding of the surviving music, which includes aural acceptability, is objective truth, and so evidence indicating otherwise cannot be trusted and must be wrong. That is probably why the topic was not seriously studied till my two papers published in 1996, where I analysed all of the evidence I could find up to 1700,5 and I linked all of the evidence in a theory explaining the evolution of tempi from the beginning of mensural notation. I consider this to be my most important contribution to music history.
I thought that the papers would be of interest to early music performers as well as musicologists, so I submitted them to a journal they both read. There was a question as to whether the papers would be accepted for publication, since the journal's policy has always been advocacy for the early-music movement, and my results contradicted the tempo assumptions of the movement, which were at least twice as fast. There was only one response to my study, a highly critical one, but the editor didn't publish it because she wanted to avoid an extended debate. To accommodate her concern for brevity, I suggested that my critic and I present single position statements, each written in full knowledge of what the other was writing, and she accepted the suggestion. We submitted the position papers, but they were never published. This was probably because my critic could not fault my analysis according to historical criteria. Tempo history will continue to be considered a mystery by the field until either someone can find an interpretation of the evidence that is felt to be believable (hoping that new evidence emerges which contradicts that which is known), or a new generation of music scholars demands more objectivity in their field. The study of music history is often primarily considered to be a service to our current music culture, with much less interest in it as an application of general principles of scholarship to the evidence on the history of music.
In my papers since, I have been promoting (with some resistance from editors) my more disciplined approach (based on how scientific scholarship is performed) to bringing more objectivity into the way theories relate to evidence in music history. I have had no indication of success in convincing others (and have noticed a deterioration in my acceptance in the scholarly community). This is to be expected, especially in a culture in which the need for re-examination of one's ideas is rare. If one is satisfied with how one does one's job, one's colleagues agree, and then someone from a different tradition comes along with a different set of rules for doing it, claiming that those rules would make the job better, it is much simpler to dismiss him as a crank than to seriously consider the issues raised.
Whenever a new conclusion presented here differs from what people in the field prefer to be true, it is likely to be widely rejected as 'unproven', or just ignored.
The Development of Western European Stringed Instruments
Chapter 1b: Methodology - Determining the original sizes and pitches of instruments
Size clearly affects what an instrument sounds like. I focus on the vibrating string length as a good measure of instrument size since it can be related to string properties and pitches. It can be estimated from interpreting measurements of pictures, surviving early written measurements, the evidence on surviving instruments, finger stretches in surviving tablature music and the reported pitches that the strings were tuned to (converted to pitch frequencies by knowledge of the history of pitch standards).
Estimating an instrument's string length by measuring it in an undistorted picture is straightforward if the strings are close to being parallel to the plane of the picture, and for comparison, there is something else of known dimension at the same apparent distance from the viewer. Let us call Sp the string length measured in the picture, Rp the apparent length of the reference object measured in the picture, Rf the known full-size real length of the object and Sf the full-size string length we want to find. Then by proportionality, Sf = Rf(Sp/Rp), where / means divide, no symbol means multiply and parentheses () enclose values that are calculated before being multiplied or otherwise operated on by what is outside. The most obvious reference object in a picture of an instrument being played is some dimension on the player. One possibility is total height, which for a fully grown male could, I would suggest, be about 160 cm, with an uncertainty of perhaps 20%. Another possibility would be a dimension of the head, which could be better because we have reason to expect that variation in head size would be less than variation in total height. But hair styles and head clothing usually obscure direct measurement of head size in the pictures, so I mostly use visible components of head size, namely the distance between the eyes or the distance between the mouth and the centre between the eyes, whichever distance line is closest to being parallel to the plane of the picture. From averages in a small study performed on my acquaintances, I use 6.2 cm for both of these reference dimensions, and expect that the uncertainty would be about 15%.
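The proportionality rule and the stated uncertainty in the reference dimension can be sketched as a small calculation. This is only an illustration of the arithmetic described above: the function name and the example picture measurements are my own assumptions, not values from the text; only the 6.2 cm reference dimension and the roughly 15% uncertainty come from it.

```python
def string_length_estimate(sp, rp, rf, rf_uncertainty=0.15):
    """Estimate the full-size string length Sf = Rf * (Sp / Rp).

    sp: string length measured in the picture (any unit)
    rp: reference-object length measured in the picture (same unit)
    rf: assumed real length of the reference object, e.g. 6.2 cm
        between the eyes, with a fractional uncertainty
    Returns (estimate, low, high), where low and high span the
    stated uncertainty in rf; only the ratio sp/rp matters, so the
    picture measurements need not be in real-world units.
    """
    sf = rf * (sp / rp)
    return sf, sf * (1 - rf_uncertainty), sf * (1 + rf_uncertainty)

# Hypothetical example: on a reproduction, the strings measure
# 48 mm and the inter-eye distance 5 mm; the real inter-eye
# distance is taken as 6.2 cm.
sf, low, high = string_length_estimate(48.0, 5.0, 6.2)
print(f"Sf = {sf:.1f} cm (range {low:.1f}-{high:.1f} cm)")
# prints "Sf = 59.5 cm (range 50.6-68.4 cm)"
```

The wide range even for this modest 15% uncertainty shows why such picture measurements can only bracket an instrument's size rather than pin it down.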
The above assumes that the artist accurately depicted what would be realistic, as if the picture were a photograph. Even pictures that look photographic could be distorted for various reasons. One is that artists often worked from pattern books rather than copying from life, and they could have altered pattern-book designs with other design components from memory, forming unrealistic hybrids. Another is that the artist could contract, expand or distort items in a picture to give a more desired visual balance, to fit into a space or to provide emphasis for symbolic or other purposes. Some instrument depictions are unrealistic since they would not work as musical instruments as shown (but we should be careful about assuming this, because some instruments could work in ways that we are not familiar with). And, of course, some pictures have been changed by overpainting at a later time to update the subject matter or to attempt to 'restore' it.
Clearly unrealistic depictions can sometimes be due to the incompetence of an amateur artist, or to the instrument being one of fantasy. The latter can become evident from what the picture appears to represent. In early times, artists were not respected for their creativity, and their objectives were to meet the expectations of the people for whom they made the pictures, which were to depict reality when there was no good reason to do otherwise. In many cases, the artist was trying to be more realistic than a photo-like image would be, by twisting design components around to show their most interesting and informative aspects. Picasso said that art is the lie that reveals the truth. How the lie reveals the truth involves using a visual language that needs to be understood by the intended viewers, who in earlier times would usually have been members of the affluent classes, and who with modern art are the artistic cultural elite.
When trying to use pictures for measurements, it is necessary to be aware of the methods, the culture and the probable objectives of the artists when making the pictures. When choosing an instrument for measurement, one should first survey other pictures of people playing what seems to be the same instrument, and pick those that appear competent, undistorted and typical.
I have encountered some historians who argue that there is no technical information of value to be had from early pictures of instruments. One group is makers who research and service the instrument needs of early-music performers, and who routinely scale the dimensions of surviving instruments in their 'copies' to meet customer requirements. They emphasise uncertainties in scholarship (as a basis for rejection) when the evidence indicates that a typical historical instrument characteristic, such as size or pitch level, differs from what is considered normal in the modern culture they share with their customers. It certainly is difficult to distance oneself from the culture of music and history we are immersed in, and to try to be fair about scholarship that challenges any of it. Training in scholarship should develop such objectivity (learning to distinguish between subjective and objective truth), but it rarely goes beyond encouraging trainees generally to be skeptical. Skepticism and cynicism, generally popular nowadays, are directed at any claim of authority, but rarely at one's own judgements. In this spirit, many of these historians can only accept studies that use evidence and techniques that they have been trained to handle (based on surviving instruments), and reject other evidence and techniques that are at least as relevant to their conclusions.
The other group is scholars with the responsibility for cataloguing collections of instruments or pictures6, who agonise about how sure they can be about the information they put in their entries. I sympathise with their predicament. They are expected to produce catalogues that are authoritatively correct, and they feel that any needed subsequent modifications could raise doubts about their competence. The modern history of historical scholarship shows that almost all of the studies of topics that attempt to be complete and definitive are not, often needing modification after publication. One can't write anything that is 'fireproof', and it is best to accept the disappointment with equanimity when one has missed something. I am not sure which is sadder, being able to convince oneself (with the hope of convincing others) that one has achieved the wanted perfection, or realising that one cannot meet the standard of perfection that one thinks is expected. History is an ongoing research project approaching truths, not a collection of truths.
Size estimation from fingering stretch
There has been some controversy about the size of the English cittern used for playing the solo repertoire published by Holborne and Robinson and in surviving manuscripts from that period (c.1600). The contenders are the small English cittern depicted by Praetorius with a string length of 35 cm and the smallest size of surviving Italian citterns with a string length of about 45 cm. A way to estimate the maximum string length is to find the biggest stretch indicated by the tablature in the repertoire, and compare it with an assumed maximum stretch for an average hand. I have assumed that my hand is of average size, and my maximum stretch between the first finger on a barré and the stretched-out little finger is about 11.5 cm.7 If we assume equal-temperament fretting for simplicity, and n is the fret number of the index finger and m is the fret number of the little finger, then the stretch = string length times (2^(-n/12) - 2^(-m/12)), where ^ means raising 2 to the power that follows (the calculation can be done on any school scientific calculator). In the case of this repertoire, the biggest stretch occurs between a barré on the 2nd fret and the little finger on the 9th fret. It occurs in a printed book of cittern lessons, so it is unlikely to be intended for a player with a particularly large hand.8 Then the maximum string length = 11.5/(2^(-2/12) - 2^(-9/12)), or 39 cm. This is enough less than the 45 cm string length of the majority of Italian citterns to strongly favour the small English cittern.
This approach is also useful in estimating the string length of the lyra viol used to play Corkine's (1610) tablature. The greatest stretch there is between the 1st and 5th frets. The maximum string length then calculates to 59 cm. That was the size of a tenor viol. This should be compared to the lyra viol played later in the 17th century. Mace's (1676) lyra viol music has its greatest stretch between the 3rd and 7th frets. The maximum string length calculates to 66 cm. By the end of the century, the Talbot ms (c.1694) indicated that the string length of a lyra viol was 71 cm. This indicates that the most valued size of lyra viol increased during the 17th century.
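The stretch calculations above can be checked with a short sketch of the equal-temperament formula; the 11.5 cm hand stretch is the assumed average from the cittern discussion:

```python
def max_string_length(stretch_cm, n, m):
    """Maximum string length (cm) reachable with a given hand stretch
    between fret n (index finger) and fret m (little finger),
    assuming equal-temperament fret spacing."""
    return stretch_cm / (2 ** (-n / 12) - 2 ** (-m / 12))

HAND_STRETCH = 11.5  # cm, assumed average barre-to-little-finger span

print(round(max_string_length(HAND_STRETCH, 2, 9)))  # 39: English cittern repertoire
print(round(max_string_length(HAND_STRETCH, 1, 5)))  # 59: Corkine's lyra viol
print(round(max_string_length(HAND_STRETCH, 3, 7)))  # 66: Mace's lyra viol
```

The three results reproduce the 39, 59 and 66 cm maximum string lengths quoted in the text.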
Evidence from surviving instruments
Surviving early instruments are the most dramatic evidence of instrument history. They provide invaluable information on materials, making methods and details of design and construction. As with any other type of evidence, care needs to be taken in its interpretation. During an instrument's centuries of coexistence with people interested in music, it is most likely that there were very many attempts to find out what it sounded like. If the sound was found interesting, it is also very likely that it was used for performances of some kind. For such performances, any deterioration in its integrity would probably have been repaired, during which it could have been modified to better suit the playing technique of the player. These modifications could easily be detected now if the repairs and alterations were incompetent, in a very different style, or used different materials from the rest of the instrument, but some other modifications could be undetectable.
The question of which aspects of a surviving instrument are original is important for instrument historians. One often finds components of instruments that clearly had previously been parts of other instruments. There was a thriving 19th-century antique-instrument industry, mostly in Italy (the most famous firm that did this was that of Franciolini), in which parts of surviving instruments were used to create instruments that differed from the originals but were most in demand by the collectors. One remarkably influential overreaction to the uncertainties resulting from such fakes was the suggestion that some well-known 16th century museum instruments that used worm-eaten wood or were composites were also later fakes.9 The problem with this suggestion is that good well-seasoned wood has always been highly prized by makers because of its greater stability. So wood with a few non-active worm holes that present no threat to structural integrity would gladly be used in making a new instrument (even by many modern makers), and parts of irreparable or redundant instruments were gladly recycled.
Instruments have been most likely to be discarded when they lost respect when fashions changed, and they had no function to perform in the new fashion. Many more instruments that could be used without modification survived than those that needed modification for use, while very few (other than those of high decorative value) survived if they had no musical use. The rate of loss with time decreased when they became uncommon and gained value as curiosities and antiques. When instruments came in different sizes, we can expect that the numbers of each size that survived were usually very unrepresentative of what they originally were.
Lutes, citterns and bandoras were very popular in Renaissance and baroque England, yet not a single example of any of these instruments made in England survives. Nevertheless, many dozens of English viols from then have survived. The vast majority of these viols were small bass soloistic ones that survived because they could be used later as cellos. Some original tenor viols have survived because they could be used as small cellos. Treble viol bodies have survived because they had been in demand from the late 17th to the 20th centuries for conversion to violas. Only one viol that approaches consort bass size (converted to a small double bass) has survived. When the playing of viol music in sets was revived late in the 19th century, the written evidence on original sizes was either unknown or disbelieved. The disbelief was because they thought they knew what the sizes of bass viols were from the predominant number of surviving ones of cello size. They then invented tenor and treble viol sizes by scaling down from the basses they knew. This new set of viol sizes, 20% smaller than the originals, became standard then, and they still remain standard in the current early-music culture.
The current viol culture does not deny the clear written evidence on original larger viol sizes, but modern sizes 'work' well at the modern early-music pitch standard of a' = 415 Hz, and original sizes would not because of excessive breakage of gut top strings. This issue is avoided as much as possible by scholars as well as musicians. The listening public would be annoyed (at least) if it was made aware of aspects of the performances it enjoys that are knowingly historically inaccurate, and it would not thank anyone who informed it of this. Very occasionally, musicians attempt to emulate the rich sonorous sound of viols of original sizes by playing the music on sets composed of modern tenors as trebles, modern basses as tenors and double bass viols as basses.
Many instrument historians (who are often makers) are so enamoured of the sound of music played on restored surviving instruments (or accurate copies) that they are much more willing to trust interpretations of measurements on such instruments than any other type of evidence. Surviving instruments are real, able to be appreciated by sight and touch as well as by sound, while pictorial and written evidence is, by comparison, very remote and lifeless. Subjective truth, associated with the perception of attractiveness, is confused with objective truth. When there is an apparent conflict between pieces of evidence of two different types, these historians are biased towards trusting the type of evidence that they are most familiar with, and tend to reject or ignore the other. I will illustrate this below with the interpretations of evidence on Praetorius's Cammerthon pitch by organ and wind-instrument specialists. This issue is important for my estimation of sizes of various stringed instruments.
If we apply the more objective approach I have outlined above, we have the obligation to present a scenario for how every piece of relevant evidence that is apparently inconsistent with what one's theory expects could possibly have become what it is. One cannot reject evidence one does not trust without presenting a good case, based on other evidence, for how it became 'wrong'.
Praetorius's pitch - organ evidence
Important examples of poor standards in current scholarship are concerned with the question of Praetorius's Cammerthon pitch standard10. Its frequency is essential for my calculations of limits on string lengths from nominal pitches outlined below. At the end of the book about instruments written by Praetorius11, after the Index, and before the list of errata, is a 2-page addition entitled only 'NB'12. It includes a diagram giving dimensions for making a chromatic octave of square wooden and round metal pitch pipes. The stated intention was to define his primary pitch standard, which he called rechten Chormass or rechten Thon, for organ makers and singers to tune to.13 This appears to have been a more precise version for organ tuning of the pitch standard he generally called Cammerthon (or the usual or rechte Chorthon).
In the 19th and 20th centuries, various scholars have used the dimensions specified to find the frequency of that standard either by measuring it from pipes made, or by calculating it directly from the physics of the air vibration in an open cylindrical organ pipe with a mouth opening. The earliest determination of Praetorius's pitch from the pitch-pipe diagram was a' = 423 Hz (0.7 semitones below modern) by A. J. Ellis14 in 1880. Early in the 20th century, A. J. Hipkins15 mistakenly assumed that the pitch standard represented by the pitch pipes was the Chorthon of Catholic churches that Praetorius preferred to his own, and so Hipkins assumed that Praetorius's Cammerthon was a tone higher than Ellis's determination, i.e. a' = 475 Hz.
The apparent origin of the modern early-music pitch standard of a' = 415 Hz is in Bessaraboff's famous 1941 book16. His suggestion was that, for practical purposes, we should approximate the original pitches with the closest pitches in whole semitone steps from modern a' = 440 Hz. Thus, accepting Hipkins's erroneous conclusions, Bessaraboff assigned Ellis's 423 Hz for Praetorius's Chorthon to a' = 415 Hz and his Cammerthon to a' = 466 Hz. He claimed that the Chorthon pitch 'is the tonality of the musical system of the classical period, which lasted from about 1600 until 1810-20'. We now know that this is a gross distortion and oversimplification17, but the grain of truth here is that the Praetorius pitch of the pitch pipes remained the usual standard for string ensembles in north and much of south Germany throughout the period stated.
One problem with Bessaraboff's proposal is that it was based on Ellis's determination of the pitch-pipe pitch. If he made the proposal later, when better determinations of the pitch (see below) indicated that it was up to 10 Hz higher, his pitches would have been 440 Hz for Chorthon and 494 Hz for Cammerthon. This highlights the other problem with his proposal, which is that the important pitch standards of the time fall near the middle of his semitone ranges, so a small shift such as this one is grossly amplified.
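Bessaraboff's rounding rule, and its sensitivity to the input frequency, is easy to state in code. A minimal sketch, with the determinations mentioned in the text used as examples:

```python
import math

def nearest_semitone_pitch(freq, reference=440.0):
    """Round a pitch to the nearest whole-semitone step away from
    a' = 440 Hz, as in Bessaraboff's proposal."""
    semitones = round(12 * math.log2(freq / reference))
    return reference * 2 ** (semitones / 12)

print(round(nearest_semitone_pitch(423)))  # 415: Ellis's determination
print(round(nearest_semitone_pitch(433)))  # 440: a later, higher determination
print(round(nearest_semitone_pitch(475)))  # 466: Hipkins's Cammerthon figure
```

Note how 423 Hz and 433 Hz, only 10 Hz apart, land a whole semitone apart (415 vs 440 Hz): both fall near the middle of their semitone ranges, which is the amplification problem described above.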
The great attraction of Bessaraboff's proposal to early musicians is that it blurs the picture enough that they can justify using the same instruments for all baroque and classical music, including copies of the superior later-baroque French woodwinds, which played at about a semitone lower than Praetorius's pitch.
Bunjes18 built a set of reproduction pipes with the resultant pitch being a' = 430 Hz, and Bormann19 did the same with the resultant pitch being a' = 427 Hz. Thomas & Rhodes20 calculated the pitch using the method of Ingerslev & Frobenius21, with the resultant pitch being a' = 426 Hz. D. Gwynn22 surveyed previous determinations and added his own corrections to that of Bunjes, which he considered most reliable, with the resulting pitch being a' = 433 Hz.
Organ historians try to follow an organ's pitch history by studying its records of repairs and alterations, and on each of the pipes, studying the nominal pitch names written on them, the styles of that writing, the signs of pitch alteration and the final pitches. When the pitch of an organ is changed, pipes can be shifted to be activated by different keyboard keys and their lengths can be shortened by trimming (or cutting scoops) or lengthened by adding an extension. Smaller changes can be made by widening or narrowing the tops of pipes. When a pipe was shifted to a new key, the new nominal pitch was sometimes marked. The trimming of pipe lengths can rarely be detected, nominal pitches on pipes are often missing and records of an organ's repairs and alterations are notoriously incomplete. Occasionally, original decoration on some pipes or the space inside an original organ case can put limits on some original pipe lengths. The original pitch of an old organ is usually estimated from the pitches of pipes with the earliest pitch-name markings that show the least evidence of alteration.
Some experts on German organs made in the 17th and 18th centuries make the generalisation that their original pitches tended to be at about a semitone above modern throughout that period. There is no question that this was the case late in the 17th century, but we are concerned with the situation in Praetorius's time, early in that century. One very highly regarded organ, in mostly original condition, is the 1616 Compenius organ in Frederiksborg. Its very unusual all-wooden piping resists the tinkering with pitch that metal pipes have always been subjected to, and it fits neatly into an original case, so original pipe lengths couldn't have been longer. It appears to have been made originally at a pitch of about a semitone above modern, and Praetorius was consulted on its design. These experts are very impressed by the sound of this organ and its association with Praetorius, and so they are very skeptical about Praetorius's pitch-pipe evidence, which implies that his pitch standard was about a semitone lower than the pitch of this organ.
Praetorius wrote that most of the organs in his time were tuned to his pitch (Cammerthon or proper Chorthon), but that there also were many at a tone higher and lower, and 'not a few' a semitone higher.23 He mounted a spirited argument against the tendency in his time to raise the currently fashionable pitch to a semitone higher24. A likely scenario is that he lost the battle against the higher pitch for the Compenius organ, but hoped (vainly, it turned out) to win the war with the arguments in his book. That organ is the only one among the roughly three dozen organs he esteemed (listing their stop dispositions) that has survived well enough for modern researchers to be able to estimate its original pitch. The vast majority of his esteemed organs could easily have been at the pitch he specified. There are three other German organs that Praetorius could have known when writing the book that have had their original pitches estimated. We have no idea what he thought of them. Two were at the pitch of a semitone above modern, and one was approximately at modern pitch. Since the pitch of the first two remained in fashion later in the century, their probability of survival would be greater than that of others. In conclusion, there can be no statistical case made from the pitches of the few early 17th century German organs estimated that the most prevalent pitch was different from what Praetorius claimed.
There is also written evidence indicating that the most popular organ pitch level early in the 18th century was a tone higher than Praetorius's pitch, and that it dropped by a semitone late in that century. These recognised changes in pitch are not reflected in the general conclusions of the organ specialists. In my analysis (which accepts all of the written evidence), the fashion of German organ pitch changed as follows: early in the 17th century (Praetorius's time) it was a semitone lower than the constant level assumed by the organ experts; it was at that level (a semitone higher than in Praetorius's time) later in that century (when Schnitger was the major maker); it went up another semitone around 1700 (to follow the pitch of the ancient organs); and it dropped a semitone about two-thirds of the way into the 18th century. We would expect these organs to be at the organ experts' pitch levels by late in the 18th century. I would be very surprised if the organ experts could tell the difference between the pipes remaining where they were during all of the 18th century (which they claim) and their being shifted a semitone at the beginning of the century (with the longest pipes unused) and back again later in the century.
The two competing theories are that Praetorius's pitch was as deduced from his pitch-pipe diagram, and that his pitch was about a semitone higher, as usually found in the surviving German baroque organs. The subjective choice that is usually taken is to decide which evidence one trusts more. A more objective choice between them should depend on the relative probabilities of how well the pitch-pipe evidence can be explained assuming the higher pitch theory, and of how well the surviving organ evidence can be explained by the lower pitch-pipe theory. It was shown above that there is no statistical case for inconsistency between the surviving organ evidence and Praetorius's lower pitch-pipe pitch.
The organ specialists have not attempted to explain how the pitch-pipe evidence could be consistent with their higher-pitch theory, but a harpsichord specialist who supports that theory has attempted this25. He noted that Praetorius had neither specified the wind pressure nor the mouth dimensions of his pitch pipes, and he proposed that these could have been high enough to get a pitch a semitone higher. As a model, he picked a late 16th century Innsbruck organ with pipes having extraordinarily large mouth dimensions, which has been restored with an extraordinarily high wind pressure of 90 mm water column. Assuming room temperature, these parameters and Praetorius's dimensions, he got a good part of the way towards pushing the pitch up a semitone on a test pipe he made.
To support his theory that the mouth dimensions were larger than expected, he also presented the mouth dimensions and diameters of 19 pipes (marked with the same nominal pitch as one of the pitch pipes) from surviving German organs roughly contemporary with Praetorius (their lengths have most probably been altered, so that is not relevant evidence). I calculated the averages of the pipe diameters and the mouth dimensions. Assuming Praetorius's pipe length, a wind pressure of 75 mm water column (considered to be the maximum expected by a specialist on early German organs, who happens to advocate the higher-pitch theory for Praetorius's pitch) and the annual average temperature of 10 degrees Celsius in Praetorius's region in Germany (churches were not heated), I calculated the pitch of a pipe with the average mouth dimensions and Praetorius's diameter. The method of Ingerslev & Frobenius was used, with a slight correction for the average mismatch between their test pipes and their theoretical calculation.26 The result was a' = 437, 436, 435 and 434 Hz for the temperament being equal, sixth comma, fifth comma and fourth comma meantone respectively. If I use the average diameter of the pipes instead of Praetorius's diameter the results are 2 Hz higher. If I assume a wind pressure of 55 mm water column (like on the Compenius organ) instead of 75 mm, the results are 3 to 4 Hz lower.27 The uncertainty in the calculation method is about ± 6 Hz.
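One of the inputs to the calculation above, the 10 degrees Celsius church temperature, matters because open-pipe pitch scales with the speed of sound, which to first order goes as the square root of absolute temperature. A minimal sketch of this standard correction (the 445 Hz reference figure is purely illustrative, not a value from the text):

```python
import math

def pitch_at_temperature(freq_ref, temp_ref_c, temp_c):
    """Scale an open-pipe pitch from one air temperature to another.
    The speed of sound, and hence the pitch, varies roughly as the
    square root of absolute temperature (in kelvin)."""
    return freq_ref * math.sqrt((temp_c + 273.15) / (temp_ref_c + 273.15))

# Illustrative example: a pipe voiced to 445 Hz at 20 C, sounded in a
# 10 C unheated church, drops by about 8 Hz (roughly a third of a semitone).
print(round(pitch_at_temperature(445.0, 20.0, 10.0), 1))  # 437.3
```

This is why the assumed annual-average church temperature, not a modern room temperature, belongs in any reconstruction of the pitch-pipe frequency.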
Thus the pipe information not given by Praetorius cannot provide an explanation of how the pitch-pipe evidence is what it is in a way that has a reasonable probability of being true. Koster seems to believe that just showing that a theory's explanation is a possibility is enough to give it validity.
Praetorius's pitch - wind-instrument evidence
The semitone-higher theory for Praetorius's pitch has been an article of faith amongst wind-instrument specialists since Anthony Baines suggested it in his famous book on woodwind instruments28. He wrote that "Recorders at Verona identical in shape and in size with those in Praetorius's scale drawings at 'chamber pitch', sound a good semitone above modern pitch; say about a' = 470". His criteria for being 'identical' must have been rather fuzzy since I (and others) have found that there is a systematic error in the sounding lengths of the recorders in Praetorius's drawing, so that as depicted, the pitch standard varies, with the smallest ones at a standard about a semitone lower than the largest ones.
We have reason to expect that a large fraction of the surviving wind instruments would sound about a semitone above modern because they were made in Venice, where they were played with organs, and that was the pitch standard of Venetian organs. Woodwind instruments made there were used extensively throughout Europe, and the woodwind specialists interpret this as suggesting that this pitch standard was largely universal (including the German regions Praetorius knew). This could well have been true for most bands of Venetian woodwinds, but the expectation of these specialists that this carried over to the pitch standards of string bands does not have any supporting evidence, and is unlikely because wind bands and string bands rarely played together (the difference in pitch standards probably was a factor). Praetorius's insistence that both types of instruments played at the same standard was very unusual for his time. A minority of surviving instruments (mostly transverse flutes and mute cornetts) were made at lower pitch standards, apparently for playing with stringed or keyboard instruments at lower standards.
The pitches of woodwind instruments other than recorders cannot be determined from Praetorius's drawings with enough accuracy to distinguish between the two theories a semitone apart. There is uncertainty concerning pitch-affecting factors that can't be seen, such as the plug positions on transverse flutes and the reed characteristics in reed-blown instruments, but in addition, the pitch can be varied rather more on them than on recorders by the way it is blown.
An instrument for which there are no uncertain pitch-affecting factors, except for how it is blown, is the trombone (or sackbut). From measuring the lengths of the vibrating air columns in Praetorius's drawings of the trumpet and 5 sizes of trombone, Steve Heavens and I have shown that, as expected, they played at the same pitch standard, if the method of blowing was the same.29 We then showed that the pitch reported by modern blowing of a surviving Nuremberg trombone contemporary with Praetorius (who preferred such a trombone), when scaled to the length of Praetorius's trombone, would sound just over a semitone higher than it would sound if a' = 430 Hz.30
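The scaling step in the trombone comparison relies on the pitch of an air column being inversely proportional to its length, other blowing conditions being equal. A minimal sketch (the numbers are purely illustrative, not the measured values from the study):

```python
def scaled_pitch(measured_freq, measured_length, target_length):
    """Pitch of an air column scales inversely with its length,
    the method of blowing being held constant."""
    return measured_freq * measured_length / target_length

# Illustrative example: a surviving instrument sounding 466.0 Hz, scaled
# to a drawn instrument whose air column is 3% longer, sounds ~3% lower.
print(round(scaled_pitch(466.0, 1.00, 1.03), 1))  # 452.4
```

This inverse-length rule is what allows a pitch measured on one surviving trombone to be transferred to the lengths measured in Praetorius's drawings.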
Assuming the theory that the pitch deduced from the pitch-pipes is true, the only explanation for this result is that the modern style of trumpet and trombone blowing (the same in modern and early music ensembles) produces a pitch about a semitone higher than in the blowing style at Praetorius's time. Modern blowing technique is characterised by what has been called the 'keyhole principle', in which the vertical direction in an old-style keyhole (having a vertical slot with a wider round top) represents the possible pitches, and one blows to pitch at the round top. Above the round top, the pitch breaks into the next higher harmonic of the vibrating tube. At the round top, the sound is richest (with more contribution of higher harmonics to the sound quality), is the most resonant, and it is easiest to blow a stable pitch. This can be called 'playing on the resonances'. In that explanation, in the early style of playing the trombone and trumpet (in non-military circumstances), they were played about a semitone lower than at the resonances. The softer sweeter sound of playing lower than the resonances could well have been considered to confer a more vocal quality.
There is early evidence that supports the hypothesis that wind instruments that could be played off the resonances often did. The virtuoso music for 17th century trumpet includes short ornamental notes that could only be played by lipping both a tone above and a tone below their normal notes. The evidence on early reeds indicates that they were much stiffer than the reeds that modern players use in both modern and early music. Stiffer reeds transfer much of the control over pitch from the fingering to the lips, with more effort in playing and more concentration needed to play in tune. A good reason for normally lipping a semitone lower than the top of the pitch range for a note is that instruments that could imitate the vocal appoggiatura strove to do so (a modern equivalent is that instruments that can imitate the vocal vibrato, usually do so). According to Tosi31, the appoggiatura was a continuous slide in pitch. The slurring between two fixed pitches on keyboard and fretted instruments would be an inferior imitation. A practice of normally lipping below the resonance would give the continuous pitch range for lipping that accommodates the appoggiatura from above. This appoggiatura was a very important component in music performance from the middle of the 16th century onwards through the baroque and later.
If we allow ourselves the subjective luxury of judging the trustworthiness of the evidence, without accepting responsibility for having to present a reasonable case (based on other evidence) for what could be wrong with what we do not trust, then of course, we would prefer Praetorius's pitch-pipe evidence to be wrong, rather than modern lipping on the trombone to be wrong. We enjoy the music that modern early music groups produce. We want to trust that the wind players are playing in a reasonably accurate simulation of the original style, and would prefer to avoid considering that this may not be true. But the only admissible evidence for evaluating an historical theory should be historical evidence, and we should maintain the scholarly discipline of considering this modern evidence to be historically irrelevant. The very popular expectation of early musicians that an instrument of authentic design will automatically lead the player to authentic performance practices is pure fantasy.
A majority of the people presently interested in Praetorius's pitch are organ and wind-instrument specialists, who make broad generalisations about original pitch standards from pitch evidence collected within their specialisms, ignoring other kinds of evidence. They believe that it was a semitone higher than modern. That theory remains falsified by Praetorius's own way of communicating that pitch, the pitch-pipe evidence. The evidence of the limits on the relationship between string-length and pitch (the theory of which is given below), as given for gut strung instruments in this book, is consistent with the pitch given by Praetorius's pitch pipes, and not with the semitone-higher theory.
Pitch and string-length limits from string properties and Praetorius's evidence
String physics can relate the vibrating string lengths of instruments to the range of pitches that the strings can be tuned to. The highest string has to last long enough for the musician to get on with making music, and the lowest string needs to sound well enough to be musically useful. The breaking stress (i.e. tensile strength) of the string material is closely related to string longevity, but how close to the maximum stress a string can 'safely' be tuned is a matter of judgement, which could vary (and has varied) in different historical circumstances. The deterioration in the sound of strings made of any particular material as they get thicker and are tuned to lower pitches is largely understood in terms of inharmonicity (loss of harmonics, leading to loss of pitch focus and dullness of sound), pitch distortion (sharpening on fretting) and pitch instability (the variation of pitch with changing vibrating amplitude), but again, how bad is too bad is a matter of judgement in the culture of the time. I will quantify what these judgements were for gut-strung instruments by analysing historical evidence, and then present a table of acceptable ranges. Rather rougher estimates of the ranges of metal strings will also be made.
The highest-pitch longest-length limit
For a uniform string, according to the Mersenne-Taylor Law, the fundamental pitch frequency (f) of a string times the vibrating string length (L) equals half the square root of the string stress (S) divided by the density (ρ), or fL = (1/2)sqrt(S/ρ). Stress in a string is defined as the stretching force (tension) divided by the cross-sectional area. The tensile strength of the string material is defined as the stress at which breaking occurs. The tensile strength of plain metal strings can depend on diameter since the process of drawing a wire through successively smaller die holes introduces dislocations in the structure that inhibit the crack propagation that is necessary for breaking. In fresh well-made gut strings, the tensile strength depends mainly on the average angle between the gut fibres and the string axis. That angle results from the twist that is put into the string when it is made. For maximum strength in thin treble strings tuned near the breaking stress, they have normally been made with the minimum twist necessary to produce cylindrical strings out of the few membrane-like pieces of gut each is made from. These are called 'low-twist' strings.
We can then consider that there is a maximum working stress for a treble (low-twist) gut string that represents the stress at which the rate of string breakage of the highest-pitched string is just tolerable. With strings of the same material, density is constant, so we can consider that there is a highest acceptable product of the frequency and the vibrating string length, or 'fL product', which is proportional to the square-root of the highest acceptable stress. Some musicians find it difficult to accept that gut string breakage depends only on the string length and frequency, and not on the diameter and tension. They associate higher pitches with thinner strings and expect that a thinner string can go to a higher pitch. But if they did the experiment of tuning a low-twist gut violin 1st string until it broke and then did the same with a low-twist gut violin 2nd, they would find that the 2nd breaks at a much higher tension, but that the pitches at breaking are as close to the same as can be expected from the variability of a natural product.
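The link between the fL product and working stress can be sketched numerically. This is a minimal illustration, assuming a typical density for gut of about 1300 kg/m³ (a value not given in the text):

```python
from math import sqrt

def fL_product(stress, density):
    """Mersenne-Taylor: f * L = (1/2) * sqrt(S / rho), in m/s."""
    return 0.5 * sqrt(stress / density)

def stress_from_fL(fL, density):
    """The inverse: S = 4 * rho * (f * L)**2."""
    return 4.0 * density * fL ** 2

GUT_DENSITY = 1300.0  # kg/m^3 -- an assumed typical value for gut

# The working stress implied by the early-baroque limit of fL = 210 m/s:
stress = stress_from_fL(210.0, GUT_DENSITY)
print(f"implied working stress: {stress / 1e6:.0f} MPa")
```

On this assumed density, the maximum fL of 210 m/s corresponds to a working stress of roughly 230 MPa, independent of the string's diameter, which is why breakage depends only on frequency and length.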
To determine the maximum fL product tolerable in a historical period, we need to consider instruments that push the pitch limits by having an exceptionally large open-string range, and for each we need to know simultaneously its vibrating string length, the nominal pitch of its highest string and the pitch standard that applied to that nominal pitch. A source that provides all of this information is the book Syntagma Musicum II by Michael Praetorius.32 Scaled drawings of most of the instruments discussed in the text provide the vibrating string lengths, tables of tunings provide the nominal pitches, and the basic pitch standard used is defined by the speaking lengths and cross-sectional dimensions of diatonic octave sets of cylindrical and square pitch pipes, indicating that his standard was about a' = c. 430 Hz33.
The gut-strung instruments in the book with the large open-string ranges are the lute in chorthon (2 octaves + 5th on 61.8 cm), the short neck of the Paduan theorbo (2 octaves + 4th on 97.2 cm), the large 5-string bass viola da braccio (2 octaves + major 3rd on 75.0 cm) and the viola bastarda type of viol (2 octaves + 4th on 72.9 cm).34 The fL products calculated for the highest strings on these instruments are respectively, 211, 209, 207 and 209 metres/sec, indicating that a good estimate of the maximum fL product acceptable in the early baroque was about 210. In the middle of the 19th century, when orchestral woodwinds were asserting their power by pushing pitch standards up to sound more brilliantly, many violinists had to live with an fL product of over 220. It was mainly pressure from the rate of breaking of violin 1sts that lowered the pitch standard, as a compromise, to a' = 440 Hz later in the 19th century. At 440 Hz, the violin 1st fL product became about 216.
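These fL products can be checked with a few lines of arithmetic. The nominal top-string pitches used below (f' for the lute, a for the theorbo's short neck, d' for the viola bastarda) are my own back-calculated assumptions that reproduce three of the four figures given above:

```python
A4 = 430.0  # a' from Praetorius's pitch pipes, in Hz

def freq(semitones_below_a4):
    """Equal-temperament frequency a given number of semitones below a'."""
    return A4 * 2.0 ** (-semitones_below_a4 / 12.0)

# (instrument, assumed semitones of top string below a', string length in m)
cases = [
    ("lute in chorthon", 4, 0.618),             # f' (assumed)
    ("Paduan theorbo, short neck", 12, 0.972),  # a (assumed)
    ("viola bastarda", 7, 0.729),               # d' (assumed)
]
for name, semitones, length in cases:
    print(f"{name}: fL = {freq(semitones) * length:.0f} m/s")
```

The three cases come out at 211, 209 and 209 m/s respectively, clustering around the 210 m/s limit.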
The metal-strung instruments tell us about the highest fL products of some of the metals involved. The fan-shaped fretting of the bandora suggests that the string length of the top course was at its maximum when the design was developed in the 3rd quarter of the 16th century. For the iron of that time, the top course was 5 semitones lower than it could be with gut. By 1580, when the orpharion was invented, much stronger ferrous metal was available from Meuler in Nuremberg, and this was reflected in the top course being 1 semitone higher than it could be with gut. After 1600, Meuler apparently perfected his process and the top course of the theorboed lute was almost 5 semitones (and of the gittern-tuned small English cittern over 4 semitones) higher than it could have been with gut.35
There are indications that after Meuler's success in achieving dramatic increases in tensile strength of ferrous wire, the other Nuremberg wire drawers improved their processes so that the subsequent highest fL product for iron was increased by about 2 semitones, being about 3 semitones lower than that for gut. They apparently did the same for brass, resulting in a highest fL product about 6 semitones below that for gut.36
The lowest-pitch shortest-length limits - pitch instability and pitch distortion
In pitch instability, the pitch sharpens in strong playing. The frequency changes because the string length and the string tension change while playing. When that is because of the high amplitude of vibration, the frequency change (Δf) divided by the frequency (f) equals a quarter times the ratio of the elastic (or stiffness or Young's) modulus (E) to the string stress (S), times the maximum stretch of the string due to strong playing (ΔL) divided by the vibrating string length (L). In symbols only, Δf/f = (1/4)(E/S)(ΔL/L). The maximum stretch divided by the vibrating length (ΔL/L) for a plucked string is [1/(2(r − r²))] times (d²/L²), where r is the fraction of the vibrating length that the distance of the plucking point from the bridge represents, and d is the initial displacement of the string at that plucking point. For the bowed string, r is 1/2 and d is the displacement at the mid-point of the vibrating length. The Mersenne-Taylor formula can substitute for the stress, S = 4ρf²L². Then the pitch instability (Δf/f) equals the product of a constant (1/32), times a term of properties of the string material (E/ρ), times a term of how the string is used on the instrument [1/(f²L⁴)], times a term of how the string is played [d²/(r − r²)].
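The pitch-instability formula can be put into a short function. The string parameters below are purely illustrative choices of mine (a hypothetical ferrous string plucked 3 mm at a quarter of its length from the bridge); only the formula and the E/ρ value come from the text:

```python
def pitch_instability(E_over_rho, f, L, d, r):
    """Delta-f / f = (1/32) (E/rho) [1/(f^2 L^4)] [d^2 / (r - r^2)]."""
    return (E_over_rho / 32.0) * (d * d / (r - r * r)) / (f * f * L ** 4)

# A hypothetical ferrous string: E/rho = 25e6 m^2/s^2, 200 Hz on 60 cm,
# plucked 3 mm at a quarter of the vibrating length from the bridge.
sharpening = pitch_instability(25e6, 200.0, 0.6, 0.003, 0.25)
print(f"Delta-f/f = {sharpening:.4f}")  # compare with the 0.02 tolerance
```

This hypothetical case comes out at about 0.007, comfortably inside the tolerance; note how strongly the result depends on f²L⁴, so shorter or lower-pitched strings are far more exposed.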
To modern ears at least, the maximum tolerable pitch instability is about a third of a semitone, or Δf/f = 0.02 (2%). On plucked instruments, the maximum amplitude occurs at the pluck, after which the pitch decreases as the amplitude dies away. If the ear's initial judgement of pitch is not confirmed immediately afterwards, the perception is of a twang with only an impression of pitch. This happens mostly with low-tension iron or steel stringing. On such ferrous metal strings, E/ρ is very high (about 25 km²/s²), but on gut strings, with E/ρ less than 5 km²/s², inharmonicity becomes serious well before pitch instability or distortion does. On bowed instruments, when there is pitch instability, strings have to be fingered flat to stay in tune in very strong playing. This happens particularly on some modern cello C strings.
In pitch distortion the string stretches and sharpens because of pressing the string against the fingerboard, Δf/f = (1/2)(E/S)(ΔL/L) = (1/8)(E/ρ)[1/(f²L²)](ΔL/L). This pitch sharpening on fretting is the main reason for changing string type to one with a lower E/ρ for lower strings on metal-strung fretted instruments. If for different instruments we consider that the action is equally-well adjusted, ΔL/L will be constant. Then for a constant maximum tolerable pitch distortion, we can deduce that the fL at the bottom of the range is proportional to the square root (sqrt) of E/ρ. This is constant for a particular metal, and since the maximum fL is roughly constant for a particular metal, the pitch range for that metal is the same for different string lengths.
An estimate of the bottom of the range for early Renaissance iron can be made from Tinctoris's statement that the cetra could be strung all in iron. The open-string range was a fifth (7 semitones). If we accept the highest fL was that for Praetorius's bandora, 5 semitones below the highest fL of gut, the lowest fL would be an octave below the highest fL of gut. Using published values of E and ρ37, the values of sqrt(E/ρ) for iron, brass or copper, silver and gold are about 5100, 3100, 2800 and 2140 m/sec respectively. Then the lowest fL for brass or copper, silver and gold would be lower than that of iron by 7.5, 10 and 15 semitones respectively.
The lowest-pitch shortest-length limits - inharmonicity
Inharmonicity is the effect that limits acceptability of the sound of low gut strings. In the inharmonicity of a uniform string, the real frequency of the harmonic called 'the nth mode' (the fundamental is the first mode), which we represent by fn, divided by the in-tune frequency of that harmonic, which is n times the fundamental frequency (f1), equals 1 + B(n − 1)², where B is the 'inharmonicity constant'. In symbols only, the inharmonicity of the nth mode fn/(nf1) = 1 + B(n − 1)². The constant B = (π²/32)(D/L)²(E/S), with D being the string diameter. If, as above, we substitute for the string stress using the Mersenne-Taylor formula, we find the inharmonicity constant B is equal to the product of a constant (π²/128), times a term of properties of the string material (E/ρ), times a term of how the string is used on the instrument [D²/(f²L⁴)].
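As a sketch, the inharmonicity constant and the resulting sharpening of a mode can be computed directly from these formulas. The string parameters below are my own illustrative assumptions (E/ρ = 4 km²/s² for thick gut, a 2 mm diameter), not values from the text:

```python
from math import pi

def inharmonicity_B(E_over_rho, D, f, L):
    """B = (pi^2/128) (E/rho) [D^2 / (f^2 L^4)]."""
    return (pi ** 2 / 128.0) * E_over_rho * D * D / (f * f * L ** 4)

def mode_sharpening(B, n):
    """Real frequency of the nth mode over n*f1: 1 + B(n - 1)^2."""
    return 1.0 + B * (n - 1) ** 2

# Illustrative thick gut string: E/rho = 4e6 m^2/s^2, D = 2 mm,
# fundamental 53.7 Hz on a 72.9 cm string (the viola bastarda's low AA).
B = inharmonicity_B(4e6, 0.002, 53.7, 0.729)
print(f"B = {B:.5f}; 5th mode sharp by {(mode_sharpening(B, 5) - 1) * 100:.1f}%")
```

On these assumed values the fifth mode comes out a few percent sharp, illustrating how the upper harmonics of a thick low string drift out of tune with the fundamental.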
We assume that there is a maximum inharmonicity in the sound of the lowest gut string that is tolerable in a music culture, but that may vary with the type of instrument family (especially when, as with the lute or lira da braccio, it is played together with an octave string). Maximum inharmonicity is expected on the lowest string of the member of that family with the maximum open-string range. In the Praetorius evidence, it is the viola bastarda for the viols and the large 5-string bass viola da braccio for the fiddles. To find how this maximum inharmonicity in each family affects the pitch range in other members of each family (or set) with different vibrating string lengths, we invoke the Tension-Length principle, which states that the tension of corresponding strings in members of different sizes is proportional to the string length38. This is an empirical principle that is reasonably consistent with most of the evidence of the stringing of historical and contemporary instruments.
Combining this principle with the Mersenne-Taylor formula, we get D² proportional to 1/(f²L). For constant maximum inharmonicity, D² is proportional to f²L⁴. Thus 1/(f²L) is proportional to f²L⁴, so f⁴L⁵ is a constant for the lowest possible string in members of the family of instruments. Then if fo and Lo are the frequency and string length of the lowest string of the family member at the limit, for a different string length L, the lowest frequency is fmin = fo(Lo/L)^(5/4), and for a given lowest frequency f, the minimum string length is Lmin = Lo(fo/f)^(4/5). The viol with the largest range in Praetorius's data, on which the lowest string is expected to have the maximum inharmonicity, is the viola bastarda. This instrument provides Lo = 72.9 cm, and fo = 53.7 Hz, which is the frequency of AA at a' = 430 Hz. It turns out that the same maximum inharmonicity applies well for fiddles as well. When the lowest string is supported by an octave string, as with the lute, more inharmonicity is acceptable, so a lower pitch can be tolerated. On Praetorius's lute, Lo = 61.8 cm and fo = 56.9 Hz, which is the frequency of C at a' = 383 Hz. It was called a Chor Laute, implying that it was in his preferred Chorthon, a tone below Cammerthon.
With this relationship, if there is a factor of 2 in L, it leads to a difference of 15 equal-temperament semitones of f (such a semitone has the frequency ratio of a twelfth root of 2, which is 1.0595), 3 more than with the fL product. If there is a factor of 2 in f (12 semitones or an octave), it leads to a ratio of 1.74 in L. This reflects the observation that when a member of a family had a larger open-string pitch range than the others, it was the bass, and when the ranges were the same, there was more variability in the bass sizes.
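Both factor-of-2 claims follow directly from f⁴L⁵ being constant, and the viola bastarda anchor gives a lowest-pitch formula for any viol length. A quick check (the 60 cm length is just an arbitrary example of mine):

```python
from math import log2

def lowest_freq(L, Lo=0.729, fo=53.7):
    """Lowest usable frequency for length L from f^4 L^5 = const
    (anchored on the viola bastarda: Lo = 72.9 cm, fo = 53.7 Hz)."""
    return fo * (Lo / L) ** 1.25

# Doubling L lowers the pitch limit by 12 * log2(2**1.25) = 15 semitones,
# and an octave of f corresponds to a length ratio of 2**(4/5) = 1.74.
print(12 * log2(2 ** 1.25), round(2 ** 0.8, 2))
print(f"a 60 cm viol could carry a lowest string down to {lowest_freq(0.60):.1f} Hz")
```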
If the lowest string pitch is somewhat below the range calculated from the above relationship, it can be brought into the range by violating the Tension-Length principle and using a thinner string. For each 6% thinner, the inharmonicity constant is lowered by about 12% (both it and the tension being proportional to the square of the diameter). The tension is then also reduced by 12%, which reduces the amount of sound energy that string can produce. Such a weaker lowest string can be tolerated in a smaller member of a set that plays together because notes on that string rarely need to sound strongly in the ensemble. It would usually not be tolerated on a bass member of the set or on a member that plays full-range solos.
Around 1580, new lutes started to appear with the lowest string a 4th lower than before, and the viola bastarda appeared, which was a viol that used the same range expansion. The expansion appears to be associated with a kind of thick gut string (called 'catlins' or 'catlines' in English sources) that newly became generally available. With constant D, L, and inharmonicity constant B, the drop of a fourth (a decrease of a factor of 3/4 in f) can be accomplished by decreasing E/ρ by the square of the change in f (9/16), or about a half.
It has been suggested that the range expansion was due to an increase in density by the string being loaded with heavy metal particles or salts as it was twisted up39. Such a string would be completely opaque, while several sources indicated that, at least when new, it was clear or translucent in transmitted light. The only hypothesis that can reasonably explain all the evidence is that the string elastic modulus was reduced by rope construction. There is clear evidence that this kind of construction was used on thick musical instrument strings40.
As with tensile strength, the elastic modulus of a gut string depends mainly on the angle generated by twisting between the gut fibres and the string axis. To get a specific average angle, the number of twist turns is inversely proportional to the diameter. String makers usually varied the number of turns less than this, automatically making the twist angle greater with thicker strings, but keeping within the twist limit above which the string takes the shape of a corkscrew or helix. Near this limit, the string is called a 'high twist' string, and we assume that before catlins became available, the lowest string was of the high-twist type. On the strings we make, we have found that the elastic modulus of high-twist ones is about half that of low-twist ones, and that of roped ones (catlins) is about half that of high-twist ones.41 These measurements are consistent with the theory.
The following tables give the calculated string-length limits for gut strings on bowed instruments with a single lowest string and on lutes with an octave-pair lowest course. It is likely that the limits for a single lowest string would apply to plucked as well as bowed instruments, and those for an octave pair lowest course also apply to bowed as well as plucked instruments. The string lengths in the left table (with a low-twist highest string) are calculated from 21000 (210 m/sec in cm/sec) divided by the frequency. The string lengths in the right table (with a catlin lowest string) are calculated from equal inharmonicity with the lowest string of the viola bastarda for instruments with a single lowest string, and of the lute for instruments with an octave-pair lowest course. When the lowest string is of high-twist gut, its pitch limit is a fourth higher. If one wants to extrapolate beyond the range given, there is a factor of 2 in the longest string lengths in the left table for every 12 semitones, and in the shortest string lengths in the right table for every 15 semitones. The approximation of equal temperament is used in these calculations.
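The two limits in the tables can be reproduced from the numbers already given: the longest length from fL = 210 m/s, and the shortest from equal inharmonicity with the anchor instruments. A sketch (the 143.3 Hz example pitch is my own choice):

```python
def longest_length_cm(f):
    """Top-string limit: fL = 210 m/s, so L = 21000/f in cm."""
    return 21000.0 / f

def shortest_length_cm(f, octave_pair=False):
    """Lowest-string limit from f^4 L^5 = const: L = Lo * (fo/f)**(4/5).
    Anchors: lute (octave-pair course) or viola bastarda (single string)."""
    Lo, fo = (61.8, 56.9) if octave_pair else (72.9, 53.7)
    return Lo * (fo / f) ** 0.8

for label, f in [("AA (53.7 Hz)", 53.7), ("d (143.3 Hz)", 143.3)]:
    lo, hi = shortest_length_cm(f), longest_length_cm(f)
    print(f"{label}: from {lo:.1f} cm to {hi:.1f} cm")
```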
Let us be clear that the ranges presented here represent the judgement of Praetorius's musicians about how long a top string should last while still being musically useful, and how dull a lowest string can sound and still be musically acceptable. All that the string physics contributes is the extrapolation of individual worst cases from Praetorius's evidence to the full range of string lengths and pitches.
Table: Gut string limits of pitch and string-length from Praetorius
Chapter 2: Performance practices: early compared to modern
The evidence has been sketchy on how performance practices, including the use of instruments, varied over time and place in historical musical cultures. If we had more of such evidence, our understanding of early practices would certainly be more complex than it now can be, but that in no way invalidates the understanding we can gain from what is available. In the process of scholarship, one formulates theories as generalisations that go beyond the evidence, and how far they go in time and place is only limited by the existence of contrary evidence. One popular generalisation nowadays is to apply modern performing traditions while trying to understand and perform surviving music from previous cultures. The sparse contrary evidence is considered inconclusive since it cannot exclude the modern way as an historical possibility. This can be effective in generating attractive early music, but is not good history. In history, we consider all of the possibilities not excluded by the evidence, and then try to objectively evaluate the probabilities of the occurrence of each on the basis of evidence. The modern way feels so natural that many consider it to be neutral, without bias with respect to the performance practice of any place or time. On the contrary, this is highly biassed, and to combat that bias, it is desirable to initially ignore the modern way and explore the other possibilities, and then consider it only if there is evidence to support it.
Let us consider some aspects of early performance practices that are largely inconsistent with the modern way of doing things. One difference is the very much greater respect we now give to the composer's contribution to music than to that of the performer. Musicologists work very hard to establish the version of a piece of music that most closely represents the composer's original creation. Then this version is slavishly reproduced in performances. Musicians in pre-classical Europe also respected the composer's contribution (since the composer was very often identified), but they not only felt free to modify the music in their own ways: they apparently were moved to do so to demonstrate individuality and professional competence. There was not the clear distinction between professional composers and performers that there is today. Composers of repute mostly earned their living as performers, and most performers of repute also had to demonstrate composition skills in their performing.
The modification of pitch by musica ficta will not be discussed here. Neither will the varying of pitch by less than a semitone, which is likely to have occurred very often in individual interpretations, but was not systematic enough to have left other than occasional evidence in instruction manuals. The modifications I will discuss are various types of embellishment. These involve colouring individual notes, usually by simple slurred pitch variation around them (gracing), replacing a sequence of notes in a melody by many more shorter ones (division), and more complex modifications. Most surviving music gives very little indication of embellishment. The theory supported here is that it was expected to be added by the competent performer to all music (including that which we now consider to be great music). Many musicologists believe in the competing theory that when a composer did not notate embellishment, it was considered unnecessary. Their view is that added embellishment in performance obscures what the composer created, and is thus undesirable. Such musicologists cannot accept the clear evidence on original tempi because its slowness also deprives music of the movement they expect it to have. If they combined original tempi with embellishment, that could provide a degree of movement that would be more acceptable to them.
The occasional evidence of composers objecting to embellished versions of their work by others has been considered to be evidence in favour of the theory that embellishment was optional. But that evidence is quite ambiguous as to whether the objection was to all embellishment or only to a particular type such as division. The identical objection can be heard today concerning the self-indulgence of musicians in modern jazz.
Almost all of the highly embellished surviving music is for lute and harpsichord, and this has been interpreted as the result of their being plucked instruments, in which the notes die away quite soon after sounding. Embellishment would then be the way to extend the effective duration of the notes. An alternative explanation is that these instruments had to play several polyphonic voices simultaneously, and though instruments responsible for only a single voice could easily improvise embellishment, doing it while also playing other parts is more difficult, and thus more likely to have needed to be planned beforehand, and thus notated. This explanation is supported by considerable evidence of embellishment in organ music, and in vocal performance, which don't have this limitation.
There is much evidence of embellishment throughout the period, but fashions in embellishment changed. The occasional evidence of avoidance of embellishment involves changes in fashion (usually complaints about excessive use of division), not the avoidance of all embellishment. Advice on the choice of embellishment survives, but none for when not to use any embellishment at all. This all implies that some sort of embellishment was considered natural in all performances.
There was a hierarchy in the ways that musicians embellished compositions. Gracing is the simplest, and particular graces could usually be learned by imitation after hearing them. Lute books occasionally showed how a newly popular grace could be fingered in tablature, but instruction manuals usually considered it unnecessary to describe them. Almost everyone could improvise graces in a performance, and the question of usage (i.e. which one, just how it was done and where it was applied) was mainly a matter of spontaneous expression and personal style within a context of good taste at the time. There is no evidence of early objection to excessive gracing.
Division was not as easy to do, as evidenced by the many manuals teaching readers how to do it. The manuals provided a catalogue of divided versions for each single long note leading to the next one, or for a sequence of a few such notes, usually in a cadence. The student wanting to divide such a sequence would find it in the catalogue, replace it by a choice amongst the variety of divided versions that it offered, and thus assemble his or her own divided version. We can expect that some did no more than this cut-and-paste procedure, while others gained manual and aural experience with sequences that linked notes, and eventually could improvise their own divisions. Most divisions were slower than the fastest ones, and these slower divisions were most probably subjected to other embellishment types like gracing and time alteration.
Most of the objections to embellishment referred to excessive or distasteful division, to which no division would be preferred. In the Renaissance and the baroque, some performed harmonic division, with new harmonies inserted between the original ones (and occasionally replacing them), and this may have led to more objection than ordinary division.
The next step up in musical accomplishment was to add an independent new part to a polyphonic composition or to replace a preexisting part with a new one. In the common circumstance of the new part being higher than the others, this has been called 'descanting'. The new part could have note lengths similar to the other parts, or more notes of shorter lengths (essentially a divided part). To do this well required a good knowledge of (and/or feel for) the rules of polyphony and harmony. Some musicians had reputations of being able to do this spontaneously, but most had to work it out beforehand.
When musicians performed together in ensembles, there usually was some agreement to keep the embellishment from getting out of hand. This mainly referred to division. Of the ways of keeping order that there is evidence of, one was for the different lines to take turns. Another was for only the top line to embellish, with the proviso that only one musician embellished when more than one was playing the line, and when the first line was resting, the next line down was free to indulge. 'Heterophony' is the term for the simultaneous sounding of embellished and unembellished versions of a tune, and very early evidence for it is the probable performing style on the ancient Greek aulos when the two pipes were of equal length. There is evidence for it in the Renaissance and the baroque, and it was probably usual when there was more than one musician on a part.
The mid-14th century author Vetulus wrote that there were 54 athomi, which were the indivisible units of time, in an uncia, of which there were 480 in an hour (there being 24 hours in a day). Also, there were 27 particulariter vocis, which were the indivisible units of vocal time, in an uncia. Thus, there were 7.2 athomi and 3.6 particulariter vocis in a second. The fastest written note in the music at that time was a minim in minor prolation, of which there were 1.6 in a second. It is likely that Vetulus determined how small athomi were by the fastest one could play on an instrument, and that this was twice as fast as the fastest that singers could sing, which was about twice as fast as the fastest written note.
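Vetulus's arithmetic can be verified in a few lines (the per-second figures follow from 480 unciae per hour and 3600 seconds per hour):

```python
UNCIA_PER_HOUR = 480
SECONDS_PER_HOUR = 3600

athomi_per_second = 54 * UNCIA_PER_HOUR / SECONDS_PER_HOUR  # 7.2
voces_per_second = 27 * UNCIA_PER_HOUR / SECONDS_PER_HOUR   # 3.6
minims_per_second = 1.6  # fastest written note, from the text

print(athomi_per_second, voces_per_second)
print("vocal over written:", voces_per_second / minims_per_second)  # about twice
```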
In the early Renaissance, one occasionally finds smaller note values than one finds normally, and we presume that these were performed as fast as could be expected. In instrumental music, that was the demisemiquaver in alla semibreve time, of which there were 9.6 per second, and in vocal music, that was the fusa (quaver) in C stroke (cut) time, of which there were 4.8 per second. These speeds are 33% faster than that in the above medieval evidence.
In the baroque, Quantz wrote that competent musicians could play up to eight notes per pulse beat (which he stated was 80 beats per minute) 'with double tonguing or with bowing', which is 10.7 notes per second. This is 11% faster than the above fastest speed for early Renaissance instrumentalists. Mersenne wrote that instrumentalists 'who are esteemed to have a very fast and light hand, when they use all the speed possible for them', when playing divisions or graces (aux passages & aux fredons), could play up to 16 notes per second. That is 50% faster than what was expected of Quantz's competent musicians.
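These speed comparisons are simple arithmetic and can be checked directly from the figures just quoted:

```python
quantz = 8 * 80 / 60  # eight notes per beat at 80 beats/min: ~10.7 notes/s
renaissance = 9.6     # demisemiquavers per second, from the evidence above
mersenne = 16.0       # Mersenne's fastest players, notes per second

print(f"Quantz: {quantz:.1f} notes/s, "
      f"{(quantz / renaissance - 1) * 100:.0f}% faster than the Renaissance figure")
print(f"Mersenne: {(mersenne / quantz - 1) * 100:.0f}% faster than Quantz")
```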
While practising up to be able to play very fast was formerly done only by those who wanted to specialise as musicians able to astound by fast playing, in the 19th and 20th centuries, being able to play almost as fast as Mersenne's speed specialists became a necessary aspect of music training for the profession. Heifetz was clocked at 14 notes per second playing spiccato.43
Hearing is the first sense that develops in the human embryo, and by the time we are born, we have already been powerfully influenced by the sounds heard in the womb. Dominating those sounds are the sounds of the beating heart. Those sounds involve alternating long and short beats. The steady repeating rhythm of the heart is once for every uneven pair of heartbeats, and that is the pulse of the blood. Its rhythm is between 60 and 80 beats per minute for normal people at rest. In all of the historical evidence on tempo, one note value in the most popular type of tempo corresponded with the pulse.
The earliest clear quantitative evidence of tempo was from Vetulus in the middle of the 14th century, and then the note that corresponded with the pulse was the minim in major prolation. Before then it must have been the perfect semibreve, and before then (at the beginning of mensural notation) the breve. By late in the 15th century, it was the minim in minor prolation, and in the 16th and 17th centuries it was the crotchet in the duple cut alla semibreve time (or the minim in alla breve time). In the 17th century, it was common to distinguish between a slow tempo at the bottom of the pulse range and a faster tempo at the top of the pulse range. Later in the baroque, reliance on time signatures to specify tempo deteriorated in favour of Italian time and expression words, so tempo indications became rather vaguer. Without such Italian words, a default standard tempo still pertained. In 1756, the physicist-musician Tans'ur indicated that the usual duration of a crotchet was a second (60 per minute).45 By the 19th century, Beethoven complained that 'we can hardly have any tempi ordinari any more, now we must follow our free inspiration'.
These early tempi are considered much too slow by musicologists, though none has been able to fault the analysis of the evidence, nor to offer an alternative analysis that fully respects the evidence.
General tempo standards were abandoned in music of quality, but they have persisted in fragmented form in many aspects of music, subject to modification by cultural changes. It has recently been reported that the standard tempo of modern pop songs was 120 beats per minute in the 1980s, but it has since increased to 130 today, largely as a result of the use of Ecstasy in the clubs.46
The observed sequence of tempo augmentations through the centuries has defied a simple convincing explanation. It is possible that it could have been driven by new schools of musical innovators who made increased use of the shortest note values used at the time. When their compositions became popular, musicians would want to improvise divided versions, but this was inhibited by the short note values in the originals. The solution was to augment the tempo, leaving room for the divisions (the tunes, being already familiar, would be easily recognised when performed at a slower speed). Eventually, a new shortest note would begin to be used, and this cycle could be repeated.
In modern performance practice, altering when melodic notes start and how long they last (from what the notation indicates) is mainly confined to rubato. In modern rubato, the tempo is smoothly tugged slower and faster. There is no evidence for such smooth tempo changes in pre-classical music except for rallentandos at cadences. Stepped tempo changes were notated by changes in time signature throughout the period, and changes in smaller steps became common in the baroque. Expressing these steps was the initial use for Italian tempo/expression words. Much more frequent in our period was the old meaning of the term 'rubato', which was to keep the basic measure (of several pulse beats) steady while otherwise varying when the notes started and how long they lasted. There was much greater freedom to vary the timing of notes than modern musicians are comfortable with. Dotted rhythms are examples of such variations in a repeated simple pattern.
Because of this freedom to shift notes in between the beats of the basic measure, prior planning was needed to avoid clashes when there was more than one musician to a part. Amateur musicians rarely had group rehearsals, so they rarely performed with more than one to a part, but professional ensemble musicians had to rehearse, largely for this reason. Comfortably fitting the words of subsequent verses of songs (when the setting is just for the first) was eased by such time variations plus readily adding or subtracting notes. While exploring original fingering indications in cittern music, I found very large changes in position after the shortest of notes, which suggests that the time needed for the shift was probably taken from an adjacent note.
When more than one note was written to sound simultaneously, this was often not the case when played. Arpeggiation was common when playing such notes on the lute and other single instruments. It could start from the top or the bottom or the middle (in either direction) or a mixture of these, so it is unlikely that simultaneity between different instruments was expected to be as accurate as it is nowadays.
Note production on viols and voices
A major aspect of developing modern viol technique is to suppress the grating transient noise that is made between when the bow first contacts the string and when the string sorts out its stable vibration modes. This is done by starting the bow stroke softly and then swelling the sound as soon as the stable tone has developed.
This type of note production is clearly not what one would expect from the word 'strike' in the phrase 'Strike the viol' in Purcell's famous ode Come Ye Sons of Art. Similar evidence comes from French sources. Mersenne wrote that viols 'have a percussive and resonant sound like the spinet'.47 English viol playing was the most respected in Europe at that time. Though Mersenne wrote about various differences between the French and English in how they used their viols, he would certainly have mentioned a difference in note production if there had been any.48
About a century after Mersenne's comment, Le Blanc similarly wrote that viol 'bow strokes are simple, with the bow striking the string as the jacks pluck the harpsichord strings, and not complex like those of the Italians, where the bow, by the use of well-connected up- and down-bows whose changes are imperceptible, produces endless chains of notes that appear as a continuous flow such as those emanating from the throats of Cossoni and Faustina.'49 He also wrote 'Using a smartly-drawn and plain bow stroke which resembles so much the plucking of the lute and guitar, the kind of sound that le Père Marais had in mind for his pieces, he varied it into six different types of bow strokes.'
That plain basic bow stroke was called the coup de poignet, meaning 'blow of the wrist'. It was described by Loulié, who indicated that at the beginning of the stroke, the wrist was bent to lead the hand in the direction of the stroke, with the middle finger of the bowing hand pressed heavily against the hair50 'as though you want to grate or scratch the string', and as soon as the string began to sound, the excess tension on the hair was released, and at the same time the wrist moved to lean in the other direction.51 Loulié mentioned that some variants on this basic stroke had only a beginning, and neither a middle nor an end. The soutenu had the middle and end the same as the beginning. The enflé had a minimal beginning and then swelled. When Marais wanted this stroke to be used, he notated it by putting a letter 'e' over it or soon after it. It was a special effect, not the usual way to produce a note, as modern viol players assume.
Mersenne wrote that the viol ‘imitates the voice in all its modulations’.52 This most probably included how notes were produced. In vocal technique we would then expect the sound would be strongest on the first consonant with a fall-off of intensity as the syllable progressed. This was probably a major reason why the syllable ‘ut’ was dropped from fasola singing in England (probably before the middle of the 16th century), as it was the syllable that did not start with a consonant. Voices and viols were quite interchangeable in England around 1600. Untexted part music was apparently deliberately ambiguous as to whether it was performed on viols, sung fasola with voices, or mixed. About half of the published books of the English madrigalist school indicated on the title page that they are apt for voices or viols. So the stylistic equivalence between the sound of the viol and voice is well supported in England as well as France.
Phrasing and style
There has been no attempt to study the history of phrasing. There is no doubt that in all music, there has always been concern for the dynamic shapes of units of all time spans, from the individual note to a whole programme of music. Yet most attention is given to a particular unit which one shapes most carefully. In the modern cantabile style of music performance, that is the 'musical phrase', which usually corresponds with a line of text or what can be performed in one breath. This concept of phrasing appears to be quite modern. Well on in the 19th century, C. Engel wrote: 'A phrase extends over about two bars, and usually contains two or more motives, but sometimes only one'.53 A motive is equivalent to what was called a ‘point' in the baroque. The verbal phrase, which is the basic unit for expressing ideas, usually corresponds to one or two points.
It seems that in the Renaissance, the French baroque and the early Italian baroque, the musical phrase was the same as the verbal phrase. In the late baroque, Quantz, when comparing French with Italian style, wrote that 'The French manner of singing [has] ... a spoken rather than a singing quality. They require facility of the tongue, for pronouncing the words, more than dexterity of the throat'. The French style was declamatory. Many Renaissance and baroque writers compared the performance of music to the oratory of public speakers. They sometimes suggested writing the words of pieces of vocal origin into the music of instrumental versions so that the instrumentalist could phrase it properly.
In this declamatory rhetorical style, emotion was expressed by the meaning and imagery of the words, combined with a dramatic delivery. It is rarely heard nowadays (chances are greatest from the pulpit) because it seems ludicrously exaggerated, pompous and unnatural. Yet that is the authentic way to perform Shakespeare. Modern theatrical interpretations wisely neither attempt nor claim authenticity in performing style. This style was developed so that orators could sway crowds in situations of poor acoustics and still be understood. Pronunciation had to be very clear, so consonants had to be emphasised, as well as important words. The articulation left no time spaces between the syllables of a word, a small space between words, a bigger space between verbal phrases, and a bigger space still between sentences. Delivery was considerably slower than in conversational speech.
This applied in musical performance except that gracing or short divisions were often substituted for greater sound volume as a method for providing emphasis for important syllables or words. Important words occur in most verbal phrases, so the dominant phrasing involved the shaping of verbal phrases, with peaks at the important words. These peaks interfere with the smooth shaping of modern phrasing, and this is one reason why gracing is usually avoided (when fidelity to the composer allows) in modern performances. Others are that the improvisatory skills involved are discouraged, and that the very much faster tempos taken make it difficult to perform and to listen to such embellishment. Extended division appears to have been a musical artifice that departed from trying to convince by declamation, and was a display of invention while contributing momentum to the performance.
In the contrasting later baroque Italian style, the vocalisation was supposed to express emotion more directly, stringing together ornate versions of the sounds associated with emotion such as sighs, sobs, cries, gasps, groans and chokes, with the words being quite subsidiary. Public display of emotion (without artifice that shows that it is under control) was still not socially acceptable (as it is in the visual media today), but the vocal agility demonstrated in the decorated music provided enough artifice to avoid embarrassment. Its use in Monteverdi's Lamento d'Arianna became very popular, and it became an increasingly standard feature in Italian opera (arias of this type were interchangeable in different operas to some extent). By late in the baroque, this shift of emotional expression from full words (with strong consonants) to vowels led to the common use of a standard swelling type of note production called messa di voce for both voices and instruments. Modern vocal and instrumental style is strongly influenced by this Italian baroque tradition, with the emphasis still on the vowels, but shifted from vocal agility and expressive variety to pitch accuracy and a beautiful powerful tone. The long modern musical phrase also seems to hark back to this tradition.
Standards of precision
In my youth, in the middle of the 20th century, many opera singers slid up to the written note without necessarily reaching it. The intended note was obvious from the musical context, and the pitch tension created by this practice added to the enjoyment of performances by many listeners. Around that time, the critics were engaged in a concerted campaign to raise 'standards', apparently mainly concerning precision in pitch, precision of playing together in ensembles and clean accurate playing in fast passages. Record companies, attempting to produce the 'best' recordings of popular works, had recording engineers eliminate these blemishes in the cutting room, and since some musicians and ensembles could produce such technically perfect performances most of the time, the critics insisted that this should be expected in every public performance.
The musical conservatory system rose to the challenge, and by late in the century, the critics no longer had reason to make this complaint. All the conservatories apparently did was to give greater priority to skills in precision and clean fast technique in choosing applicants and in training them, at the expense of musical flexibility and inventiveness. The early music movement grew mostly in this period. Aspiring professional early music performers sought acceptance as a branch of 'serious' music, rather than going the more informal way of folk music or jazz. The musicologist critics encouraged them to go in this direction to provide a showcase for the results of their research. Also, respectability and income promised to be higher. This required conforming to the standards in the field, thus largely leaving skills in improvisation and invention undervalued and undeveloped. The 'best' early musicians succeeded in meeting those standards.
Initial attempts at playing and making early instruments were mostly by amateurs. Some sought out the historical evidence and explored how to interpret it. Being more authentic than had been attempted before conferred status. Others felt that they understood the spirit of the original practitioners and they invented their personal versions of it. When the professional performers achieved commercial success, the amateurs were expected to rally around these heroes, and the field had to be stabilised. Both of the above groups of creative amateurs largely evaporated. The remaining amateur players were those who were happy to be pale imitations of the professionals. The makers who gave the professional players what they wanted flourished and became professional makers. The professional players also convinced professional makers of modern instruments to service their instrument needs as well. Many of the conservatories hired the professional musicians to teach early music performance, and making schools hired the professional early-instrument makers to teach their craft.
When the field became professionalised, the players had a playing technique and style and instrument designs that served their purposes well and were acceptable to their customers and to the musicologists and critics. They invested much time in practising to meet standards, and invested much money into appropriate instruments, so it is understandable that they would not welcome any subsequent research suggesting that a different performing style or technique or a different instrument is more historically accurate. If following any new research results would make what they did obviously more attractive, they would of course seriously consider the further investment. Otherwise, they would only take such research seriously if their respected musicologist mentors did as well. This didn’t happen because the musicologists were equally concerned about rocking the boat (everyone was apprehensive about whether or when the early-music ‘bubble’ would burst). The musicologists were happy that modern early music style is different enough from modern standard style to convince most that an effort has been made to be historically accurate, and this was accepted by the listeners. Keeping that style and expanding the repertoire was seen as a much more useful contribution to modern musical life than properly exploring historical performing styles. Historical research that expands repertoire and provides material for programme notes and record sleeves has been all that is welcome.
Early in the early music movement, there was an attempt to be authentic by using singers with good voices but not trained in the modern style of vocal production. This situation was not stable. Some critics steeped in the modern tradition considered that the early music singers sounded amateurish. Singers with modern training wanted to include early music in their performance options, and opera managers saw a promising expansion into early operas. A compromise was reached to keep almost everyone involved happy. Early opera performances employed loads of early-music instrumentalists in their orchestras, while voices trained in modern vocal production became acceptable as long as vibrato was noticeably reduced. Such singing now completely dominates, and the original singing in the ‘naive’ style no longer has a place in professional performance. Singers in that early style have had to acquire modern training to remain acceptable.
In the history of pre-classical music, there is occasional evidence of ensembles which had the reputation of playing with exceptional precision, and performers with the reputation of playing exceptionally fast. But there is no evidence indicating that these were considered to be practices that were generally aspired to. Most of us will admire a juggler or acrobat for his or her skilled accomplishments, but are not willing to invest the effort needed to try to do it ourselves, because these skills are not necessary for doing what we want to do. We can safely claim that standards of what was necessary for acceptability in technical perfection were considerably lower then than they are now. Some will now say that improved quality can't be objected to. On the contrary, one can object if the criteria for quality suppress spontaneous improvisation and ‘out of tune’ pitches, a very important original avenue of early musical expression, and modern standards do that.
Sensitivity to technical perfection varies in the population, and there has always been a minority that have had particularly sensitive ears for tuning and ensemble precision. In the last century, these people have become dominant in all aspects of music training and in the music industry, and they dictate the standards for professional involvement. Previously, when the arbiters of public taste were members of the affluent classes, public taste was satisfied with rather more relaxed standards.
Modern early music seems to be about what the music should have sounded like according to modern performing traditions and expectations. Since the primary responsibility of any performer has always been to provide that which the consumers appreciate, what is being offered is fully justified by its commercial success. A problem early in the movement was that the performers led the audiences to expect historical accuracy in what they heard. The compromises with modern practices that made performing practical and the performances enjoyable were hidden, and indeed the audiences were happier not to have been told about them. More recently, the dubious morality of this practice has largely been eliminated by the performers claiming only that what they offer is ‘historically informed’. With reduced dependence on authenticity and fidelity to the composer’s intention as seen from the narrow viewpoint of music historians, performers now have some more freedom.
Exploration of original performance practices essentially stopped when the early music movement became professionalised. Most performers since are convinced that the pioneers had worked out all of the historical problems, and they don’t worry about it. I strongly suspect that further exploration of historical practices would uncover some other aspects of the music that would be pleasing to modern ears. Such exploratory interpretations of the music would usually not lead to runaway commercial success, but I am sure that there are many open-minded cultural-tourist listeners who will get much out of serious minimum-compromise attempts to recreate the sounds of the music their ancestors enjoyed. There is a 'slow food' movement to better appreciate the preparation and eating of food, so why not 'slow music' (with verbal phrasing)? The people who can do it would have to be able to explore the implications of the historical evidence while being sceptical about their modern aesthetic judgements. They should exist amongst music historians, but indeed are very hard to find.
1 J. A. Westrop, ‘Practical Musicology’, Music Libraries and Instruments (Hinrichsen, 1961), p.25.
2 For a long time, I have been calling this ‘string stop’. Since others have not appreciated the advantage of this terminology, in this book, I am reverting to the term more universally used but less precise ‘string length’, meaning ‘vibrating open-string length’.
3 Two mistakes pointed out here are: assuming the equivalence of the lirone and the lira da gamba (by all historians who mention both names, going back at least to Hayes in 1930), and assuming that baroque fiddle tunings and sizes applied to 16th-century viole da braccio (by Boyden in 1965).
4 H. Meyers, private communication
5 E. Segerman, 'A re-examination of the evidence on absolute tempo before 1700 - I and II', Early Music XXIV/2 (May 1996), pp. 227-48 and Early Music XXIV/4 (Nov. 1996), pp. 681-9.
6 e.g. M. Tiella, 'On musical iconography', FoMRHI Quarterly 90 (Jan. 1998), Comm. 1551, pp. 14-7.
7 A recent study on finger stretch involving 50 people by Eric Franklin (The Lute No. 78 2006, pp. 19-20) led to a mean value of 11.3 cm, with a standard deviation of 1.5 cm.
8 A. Holborne, The Cittharn Schoole (London, 1597), 'Bonny Sweet Robin'
9 E. Segerman, 'Review: "Problems of Authenticity of 16th Century Stringed Instruments", by K. Moens, CIMCIM Newsletter XIV (1989), pp. 41-9', FoMRHI Quarterly 98 (Jan. 2000), pp. 19-25.
10 E. Segerman, 'Praetorius's Cammerthon Pitch Standard', Galpin Soc. J. L (1997), pp. 81-108.
11 M. Praetorius, Syntagma Musicum II (De Organographia) (Wolfenbüttel 1619 & 1620)
12 M. Praetorius, ibid pp. 231-2, translated in S. Heavens, 'Praetorius's pitchpipe Pfeifflin zur Chormass', FoMRHI Quarterly 78 (Jan. 1995), Comm. 1328, p. 60.
13 Clear evidence that these were the same as Cammerthon is given in S. Heavens & E. Segerman, 'Praetorius's Brass Instruments and Cammerthon', FoMRHI Quarterly 78 (Jan. 1995), Comm. 1327, pp. 56-7.
14 A. J. Ellis, 'The History of Musical Pitch', Journal of the Society of Arts XXVII (Mar. & Apr. 1880); and XXIX (Jan. 1881), pp. 109-12.
15 A. J. Hipkins, Encyclopaedia Britannica, 11th ed., xxi, p. 660.
16 N. Bessaraboff, Ancient European Musical Instruments (Harvard Univ. Press, Boston, 1941), p. 378.
17 E. Segerman, 'A Survey of Pitch Standards before the Nineteenth Century', Galpin Soc. J. LIV (2001), pp. 200-18.
18 P. G. Bunjes, The Praetorius Organ (Concordia, St Louis, 1966), Chap. XIV, pp. 772-866.
19 K. Bormann, Die gotische Orgel zu Halberstadt (Merseburger, Berlin, 1966)
20 W. R. Thomas & J. J. K. Rhodes, 'Schlick, Praetorius and the History of Organ Pitch', Organ Yearbook II (1971), pp. 58-76.
21 F. Ingerslev & W. Frobenius, 'Some Measurements of the End-Corrections and Acoustical Spectra of Cylindrical Open Flue Pipes', Transactions of the Danish Academy of Technical Sciences I (Copenhagen 1947), pp. 7-44; see review and summary by E. Segerman, FoMRHI Quarterly 99 (Apr. 2000), pp. 9-12.
22 D. Gwynn, 'Organ Pitch, Part 1 - Praetorius', FoMRHI Quarterly 23 (Apr. 1981), pp. 72-7.
23 M. Praetorius, ibid p. 103.
24 M. Praetorius, ibid p. 15.
25 J. Koster, 'Praetorius's Pfeifflin zur Chormass', presented at the Conference 'Pitch and Transposition, 16th-18th Century' organised by Internationale Musikprojekte, Hochschule für Künste, Bremen (October 1999).
26 E. Segerman, 'Spreadsheet I & F calculation of organ pipe pitch', FoMRHI Quarterly 107-8 (Apr-July 2002), Comm. 1800, pp. 7-8.
27 The Compenius organ at Frederiksborg castle has a wind pressure of 55 mm water column according to the 'Compenius' entry by H. Klotz in The New Grove Dictionary of Musical Instruments I (Macmillan, 1984), p. 449.
28 A. Baines, Woodwind Instruments and their History (1957, revised 1962), p. 242.
29 S. Heavens & E. Segerman, 'Praetorius's brass instruments and Cammerthon', FoMRHI Quarterly 78 (Jan. 1995), Comm. 1327, pp. 54-9.
30 E. Segerman, 'Praetorius's and surviving Nuremberg sackbut lengths and playing pitches', FoMRHI Quarterly 80 (July 1995), Comm. 1371, pp. 34-6.
31 P. F. Tosi, Observations on the Florid Song, trans. by Mr. Galliard (London 1743), pp. 29-33; discussed in E. Segerman, 'The appoggiatura, early vocal style and instrumental imitations', FoMRHI Quarterly 103 (Apr. 2001), Comm. 1756, p. 27.
32 Michael Praetorius, Syntagma Musicum II (Wolfenbüttel 1618-20).
33 E. Segerman, 'Praetorius's Cammerthon Pitch Standard', Galpin Society Journal L (1997), pp. 81-108.
34 E. Segerman, 'Further on the pitch ranges of gut strings', FoMRHI Quarterly 96 (July, 1999), Comm.1657, p. 58.
35 E. Segerman, 'Praetorius's plucked instruments and their strings', FoMRHI Q 92 (July 1998), Comm.1593, pp. 33-7.
36 E. Segerman, 'Praetorius's plucked instruments and their strings', op. cit.
37 E. Segerman, 'Some theory on pitch instability, inharmonicity and lowest pitch limits'. FoMRHI Q 104 (July 2001), Comm. 1766, pp. 28-9.
38 E. Segerman, 'Strings through the ages I: the history of strings and their construction', The Strad Vol 99 No 1173 (Jan. 1988), pp. 52-5.
39 M. Peruffo, 'New Hypothesis on the Construction of Bass Strings for Lutes and other Gut-Strung Instruments' FoMRHI Quarterly 62 (1991), Comm.1021, pp. 22-36.
40 A. Ramelli, Le Artificiose Macchine ... (Paris 1588); translated by M. T. Gnudi as The Various and Ingenious Machines of Agostino Ramelli (Dover, 1994); the relevant quote concerns a component of a trebuchet that is 'a thick double rope made in the same way as the thick strings of large bowed instruments'.
41 E. Segerman, 'Measuring the elastic modulus of gut', FoMRHI Q 105 (Oct. 2001), Comm. 1775, p. 10.
42 E. Segerman, ‘A re-examination of the evidence on absolute tempo before 1700 - II’, Early Music XXIV/4 (Nov. 1996), pp. 681-9.
43 F. A. Saunders, Benchmark Papers in Acoustics (1946).
44 E. Segerman, ‘A re-examination of the evidence on absolute tempo before 1700 - II’... op cit.
45 W. Tans'ur, New Musical Grammar (1756), third edition
46 M. Stock, ‘On song’, New Scientist Vol 180 No 2423 (29 Nov. 2003), p. 48.
47 M. Mersenne, Harmonie Universelle III, The Books on Instruments (Paris 1636), Third Book, Prop.1.
48 ibid, Fourth Book Prop. VII.
49 H. Le Blanc, Défense de la basse de viole contre les entreprises du violon et les prétentions du violoncel (Amsterdam 1740), translation in J. Hsu, 'The use of the bow in French solo viol playing of the 17th and 18th centuries', Early Music 6/4 (1978), pp. 526-9.
50 Robinson's bow hold, where the middle finger is on the frog, is more likely to have been used by the English in playing their popular early 17th century repertoire.
51 E. Loulié, Methode pour apprendre à jouer la viole (Bibl. Nat. Paris, MS fonds fr. n.a. 6355, fol. 210-220).
52 M. Mersenne ibid Fourth Book Prop. V.
53 C. Engel, Introduction to the Study of National Music III (1886), p. 82, cited in O. E. D.