From A to Screen

Johanna Drucker
 

HOW DO LETTERS appear on our screens, these exquisite expressions of design, our Baskerville so clearly differentiated from the Caslon and Comic Sans that we recognize instantly what font families we are inviting into view? Do they come, like pasta letters in a can of alphabet soup, intact and already formed, down the pipeline of network connections, so many obedient foot soldiers in the ranks of our textual forces? Are they conjured to their tasks through the sorcery of application apprentices calling the infinite stream of glyphic figures into service for maneuvers as written rows and ranks? Arguably among the most nuanced and thus demanding figures in our graphical universe, letters and fonts rely on an infrastructure that largely disappears in its daily operation, even as their existence guarantees the performance of the alphanumeric codes that underpin our encoded communications. We take the virtual letters as things, mistaking their appearance for substance, and we also overlook the agency of alphanumeric code, taking it for granted. In each case, and across an array of other activities sustained by writing in our analogue and digital worlds, letters function as much on account of how we conceive of them as on the basis of any autonomous existence based in what they are. Indeed, our concepts of what the letters are, as well as their literal forms, have migrated from scratched stone and inked surface to screen, and in their current iteration, they reveal much about assumptions on which other functional illusions are based.

For a profound paradox governs the conception of alphabetic letters and their functional identity in the digital environment: they are at once understood as atomistic elements, discretely defined and operationally distinct, and they are also understood as expressions of complex, distributed contingencies whose identities are produced across an array of ephemerally connected conditions. Code, that is, functional discreteness, should not be confused with appearance, the graphical particulars of individual forms. They are different orders of things in different conceptual systems. After all, the signs in alphanumeric code could be replaced. Another set of symbols, equally arbitrary and distinguishable one from another, could be substituted. The circumstances that have led us to use these familiar entities for historical reasons, our a’s, b’s, and 0’s to 9’s, are now less important than the dependence built into and around their use in many trillions of lines of code. Few elements of our digitally literate lives are more integral to their operations. Alphanumeric notation underpins the processing of code; it is code, as well as the stuff of higher-order expressions in languages of all kinds.

But the code–contingency paradox has implications for the ways we use letters, taking them for granted as a stable system in the first instance and recognizing their complex contingency in networks linking design, storage, delivery, display, and output in the second. The roots of the distinction are deep, linked to philosophical debates about stability and change that apply to particles and waves at a higher level of physics and to discussions of autonomy and codependence in systems theories and ecologies. The way we “think” the letters—how we conceive of them—shapes their design and our understanding as much as the technologies through which we make them.

Conceptions of letters function as expressions of beliefs about form and identity, providing the basis on which the functional activity of alphabetic code operates. If I could activate the a on my keyboard only through a process of distinguishing it from every other letter in the system of all letters in all fonts and styles, my typing speed would slow to geologic (or philosophical) time rates. But the fact that I can single out a letter of the alphabet is built on an assumption that it is an unambiguous, discrete entity. However, the unique reaction triggered when a keystroke communicates with an application in my system tells me nothing about the identity of letters in an ontological sense. Atomistic theories of letters have led to medieval theories of the kabbalah, the manipulation of the many names of god, the attributes and elements of the cosmos, Lullian methods of ars sciendi, and cosmological interpretations of male and female properties in procreative metaphors projected onto their forms (Drucker 1994). But this combinatoric mojo has no relation to the digital processes and platforms that are too often and so easily considered the extension of a world of spinning volvelles and diagrammatic charts (Gardner 1982). The technological distance between merkabah meditations and Turing machines cannot be measured on a single scale, any more than the GATC sequences used to represent the four nucleotide bases of the genetic code can be read as if they were actually alphabetic. Such approaches mistake accident for substance, incidental information for essential properties. Probing atomistic concepts of alphabetic letters quickly entangles technological and intellectual discussions with mystical resonances. That said, the graphical primitives that make up letterforms lend themselves to atomistic analysis. I/O elements, straight–curve oppositions, quickly get absorbed into mystical analysis of digital notation for its own sake, just as they have served occultists of various kinds over the centuries (a good example is Tory 1529) (see Figure 4.1).



FIGURE 4.1. Geoffroy Tory, Champfleury. 1529.



The graphical appearance of letters, their style and shape, which seems self-evident for other reasons, also has a history of productive tensions in relation to techniques and materials (Kinross 1992). The production of a “letter” in a digital environment is an effect of multiple, distributed processes, each of which participates in its production as surely as constraints of varied material substrates have in other circumstances. Thus the second part of the dualism in our paradox: that specific forms (be they letters, human shapes, or natural phenomena) assume their identity only as momentary configurations of energy in flux. This has connections to a systems-based ecology of signs, a view in which human apperception of the fleeting phenomena is always partial, situated, ephemeral. For letters to be grasped fully, in all their philo-semio-sophical actuality, a Peircian tripartite description of signs would need to be pressed into service (Colapietro and Olshewsky 1996). In such a description, the very formation process of signs is considered in three parts, beginning with the undifferentiated world, through resistance to difference, and into a system of relations. Such a view encompasses all signs, human and natural, in which configured meaning is understood as an effect of a larger semiotic ecology (Taborsky 1998; Frost and Coole 2010).1 Though essential to a theory of meta-informatics and signs, such frameworks are not explicitly related to an analysis of the migration of alphabetic letters across technological platforms—except insofar as the very notion of “letter” in that discussion depends on these philosophical debates and explanations. I don’t aspire to such grand ambitions, except by implication. The level of abstraction to which such arguments rise, though intoxicating, won’t explain the shape of the letters on the screen or their connection to their rich history of prior conceptions. So let’s return to the letters, the inventory of their conception, and ways such notions shift in the transubstantiation the alphabet undergoes in the course of historical change and technological variation.

What are the letters? Did our ancestors ask this, forming their proto-Canaanite glyphs in the early part of the second millennium B.C., as the alphabet emerged in a cultural exchange between cuneiform scripts and hieroglyphic signs (Driver 1976; Diringer 1943; Albright 1966)?2 If they did, they left no trace of these ruminations. The earliest recorded reflections on the letters, their origins and their identity, come from the Greeks almost a thousand years later. Their adaptation of the alphabet was, these Hellenes knew full well, a borrowing from the peoples to the East. Cadmus, the Phoenician, was identified by Herodotus as the bearer of the letters, those signs that had come into Greece from Asia Minor by a route in the north and also by sea along the Mediterranean trade routes in various waves of diffusion and exchange some time in the ninth or eighth century B.C. (Bernal 1990).

But in addition to mythic (albeit accurate) histories of writing, the Greeks were aware of the differences among sign types and systems. Plato had been to Egypt, and his investigation of self-evident signs and the myth of mimesis, The Cratylus, bears within it a mistaken conviction that the hieroglyphics he had seen on monuments abroad had a capacity to communicate directly to the eye. The contrast between his own abstract script and the pictorial signs of the Egyptians gave him ground on which to consider the effects and implications of this difference. But the “Greek” alphabet, like many of the variant scripts that spring into being on the Eastern edge of the Mediterranean, is in fact a host of different strains, spores of experiment that take hold and mutate, adopt and adapt to new soil and new tasks (Naveh 1982; also Petrie 1912; Cottrell 1971; Albright 1966).

Wait, you say, pausing for a moment in confusion. Who invented the alphabet? Which alphabet? The Greek alphabet? Each of these three questions leads to an abundance of wrong answers. The idea that the Greeks invented the alphabet is based on a particular conception—that the alphabet is a specific kind of transcription of a particular spoken language (Havelock 1976). But this is only one among several concepts of the alphabet, and the Greeks, in fact, were late in the game, receiving their packet of potential notation as a hand-me-down transmission after the initial formation of an alphabetic script on which the sequence, names, and “powers” of their letters are based (Logan 1986). The Semitic language speakers forged an alphabet to serve a tongue whose consonantal morphemes communicated adequately without vowels, and the technical specifications for their writing were different from those of the Greeks, who later modified the writing for their own use. It was the Semites who fixed the sequence of signs (so securely that the assembling of architectural elements followed it, a code of instructions for putting pieces of built form together in the right order in the turquoise-mining region of the Sinai). They gave the letters their names—aleph, beth, gimel, and so on—and their original graphic identities more than a millennium before the Greeks took them up like dragon’s teeth to sow their own ground and bring forth generations of poetic texts and language.

The alphabet was not invented but emerged, and all known alphabets spring from the same common root, which tracks to the lands of Canaan, Accad, Moab, Byblos, Sinai, and other realms whose names haunt the biblical history of a region of the Middle East that stretched as a fertile crescent from Mesopotamia to northern Africa (Sanders 2009). Its origins are intertwined with the histories of nations and peoples whose inscriptions provide a piecemeal record of the first appearances of a system of signs that was neither cuneiform, hieroglyphic, syllabic, logographic, nor ideographic, but alphabetic (letters used to represent phonemes), in the centuries beginning around 1700 B.C. at the earliest. That original alphabet has been seen through many interpretative lenses, as archaeological evidence expanded and its techniques of analysis replaced biblical histories and origin myths linked to Adam, Abraham, and the angels. But the archaeologists’ conception is also just that, one of many, rooted in empirical study of objects and evidence correlated with information about ancient languages and narratives of history retrospectively projected onto shards and fragments of remains. Even among these scholars of antiquity, passionate debates continue to appear, rooted in differences of opinion about the symbolic role of the alphabet in the formation of national identity versus its mere advantages of functionality (Sanders 2009). Empirical evidence, in this case, builds on the assumptions it is asked to serve rather than dispelling the foundations of belief. Tracking the connections between fragments of ancient inscriptions and the morphing of forms into all the many variants of Arabic, Ethiopic, South Indian, Indonesian, Glagolitic, Slavonic, Germanic, Runic, and Roman forms creates a genealogical model of cultural diffusion (Senner 1989). This can in turn be countered by a more diffuse approach to the early emergence of this, the most persistent and long-lived writing system.

But the question that guides our investigation into the paradox of letters on the screen, the tension between the atomistic and systemic identities of these forms, remains much as it has been for centuries—not simply “What is the alphabet?” but “How does the alphabet function according to the ways we may conceive it?” Just as our ancestors made use of the fixed sequence of letters, a model of order and ordering, so we take advantage of a concept of atomistic discreteness. Thus the answers have implications for our current condition, for the unacknowledged hegemonic hold of the Western alphabet on computational processing, on its infiltration into the very structure of the networked world in which its ASCII, Unicode, and BinHex systems operate. As stated before, letterforms (graphical expressions) and alphanumeric code (discrete elements of a system) should not be confused. But the two use a common set of alphabetic signs that have been in continuous use from the sands of the ancient Middle East to the digital present, over a span of almost four thousand years. And problems of technology, concealed in the apparently mechanistic issue of design, turn out to be a Trojan horse in which the problems of philosophy make their stealth entry into our guarded precincts. The migration of the letters is not a history of things across spaces, time, and material substrates but of models and concepts reformulated. One strain of this reimagination springs from the encounter with materials.

The development of contemporary modes of production, that is, the design problems of technological migration, could be studied by looking at the career of an individual designer. One example is the justly renowned Matthew Carter, who began with lessons in stone carving from his father, accompanied by exercises in calligraphy, before becoming involved in each successive wave of design production, from hot type (lead) to cold type (photographic) to digital, from the earliest pixel wrangling to programmable fonts whose variants are produced through an algorithm generating random variations (Re 2003). When he worked on designs for phototype, his training stood him in good stead, his eye attuned to the difficulties of scale and the need to compensate for the ways in which light spreads as it goes through a negative and onto a photosensitive surface, clotting the finer elements of letters at the points where strokes connected or where serifs met strokes. To correct these visual awkwardnesses (the fattening of ankles, the swelling of bellies, filled-in counters), the designer had to chisel away bits of the letter at a microscale so that the projected light coming through the film strip negative onto the photosensitive paper would not result in clumsy forms. Retrospectively, phototype seems physical, direct, only moderately mediated by contrast to digital fonts, whose identity as elements is stored, kept latent, waiting for a call by the machine to the file that then appears on screen or passes into a processing unit on a device. Carter’s early work in digital type was equally nuanced, and as memory and processing capacities escalated exponentially, so did the ability to refine digital typography.

But what to draw? Consider the complexity of the problem. Carving curved letters in stone is difficult, but inscription is direct. Description—the indication of a set of instructions for drawing produced by a design—remediates the direct process. Bitmapped fonts could be hand-tuned so that pixel locations could be defined for every point size. The image of a needlework grid at finer and finer scales makes a vivid analogy. Outline fonts had to be stored as a set of instructions for rendering the outline of the glyph, with the instructions for filling in provided by the printer. The decision to create a number of system fonts that shipped with desktop computers meant that Apple, at least, put its energy into making particular letterforms work throughout its file storage, screen, and output units (Ploudre 2011). A distinction between screen fonts and printer fonts enabled printer-specific files to communicate ways of drawing and making letters. As laser printers came into use, graphical expressions became more refined, and the demands on systems of processing and memory increased. The distributed aspects of a font’s existence had to be considered across an array of operations. Challenges of graphic design led to encounters with questions of ontology and identity.

From the outset, digital fonts had to be created across a life cycle of design, storage, processing, display, and output in environments that each had different requirements and protocols. The earliest output devices were plotter pens and dot matrix printers, primitive graphical means at best. They created crude versions of letters whose patterns of dark spots in a grid bore little resemblance to font designs. The pixelated letters on early blinking amber and green screens had the grace of punch card patterns, as refined as cross-stitch and only as resolved as the scale of the grid. But what produced these letters? What was being stored where, and how? Digital fonts explode any illusion on which stable, fixed, atomistic autonomy could be based. Even at the level of their graphic identity, letters need only be distinguishable from each other; they do not have to hold their own as pictorial shapes.
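
To make the grid-bound condition of those early screen letters concrete, here is a minimal Python sketch (the glyph pattern is invented, not drawn from any actual terminal font) of a letter stored as nothing more than a pattern of on/off cells:

```python
# A letter as early screens held it: a grid of lit and unlit cells.
# This 5x7 "A" is invented for illustration; no curves, no strokes, just dots.
GLYPH_A = [
    "..#..",
    ".#.#.",
    "#...#",
    "#####",
    "#...#",
    "#...#",
    "#...#",
]

def render(glyph):
    """Print the glyph row by row, a space for every unlit cell."""
    for row in glyph:
        print("".join("#" if cell == "#" else " " for cell in row))

render(GLYPH_A)   # only as resolved as the scale of the grid
```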

Designs for typography on desktop computers began to take shape in the mid-1980s, before fonts had any connection to networking activities. The basic problem of how to create the illusion of curves in a world of pixels forced designers into a choice between vector graphics and the tapestry world of pixels. Scalable vector graphics, designed with Bézier spline curves spaced at close enough intervals to create nuanced transitions from thin to thick, swelling shapes, and neatly nipped points of connection between strokes, required a great deal of memory. Displaying them on the screen, except while they were being designed, was probably not worth the processing and storage overhead. But each rendering of a letter required that the file be stored and processed to be seen. Even on a single machine, the existence of a letter had stretched along a line of contingent connections.
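
A sketch may clarify what designing with Bézier spline curves entails computationally. The following Python fragment evaluates one cubic Bézier segment of a hypothetical outline; a real glyph chains dozens of such segments, each demanding storage and processing at every rendering:

```python
# Evaluate one cubic Bezier segment of a glyph outline (Bernstein form).
# Control points are hypothetical; real outlines chain many such segments.

def cubic_bezier(p0, p1, p2, p3, t):
    """Return the point at parameter t (0..1) along the segment."""
    u = 1.0 - t
    x = u**3 * p0[0] + 3 * u**2 * t * p1[0] + 3 * u * t**2 * p2[0] + t**3 * p3[0]
    y = u**3 * p0[1] + 3 * u**2 * t * p1[1] + 3 * u * t**2 * p2[1] + t**3 * p3[1]
    return (x, y)

# Sample densely enough and the polyline passes for a smooth bowl.
segment = [(0, 0), (10, 40), (50, 40), (60, 0)]   # invented control points
points = [cubic_bezier(*segment, t=i / 20) for i in range(21)]
print(points[0], points[10], points[20])
```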

The choice of what to store—a glyph outline, curves and points, or a pixel pattern—had consequences at every step of the font’s life cycle. The creation of TrueType,3 Apple’s way to store outline fonts, was meant as competition for Adobe’s Type 1 fonts used in PostScript.4 The great advantage of TrueType and Adobe’s Type 1 fonts was scalability—the same design could be redrawn at any size, or so it seemed (Phinney 2011). Experiments with multiple-master fonts, such as Minion, played (briefly) with the computer’s capacity for morphing letterforms across a set of variables. But each size of a font requires rework, and in the digital design world, this is referred to as hinting. Questions of where and how a printing language lived—whether it was part of an operating system or part of applications or the software associated with particular printers—had market implications because every device had to be equipped with the ability to recognize a font file format if it were to render the letters accurately. Agreeing on a single standard for type formats, Microsoft and Apple collaborated on TrueType in the early 1990s.
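
The arithmetic behind "the same design could be redrawn at any size" is simple; the trouble arrives when the result must land on a coarse pixel grid. A small sketch (the stem width and sizes are illustrative values, not taken from any actual font) of the standard design-units-to-pixels conversion, and of the rounding that makes hinting necessary:

```python
# Scaling an outline coordinate from design units to device pixels, then
# snapping naively to the pixel grid. The stem width is an invented value.

UNITS_PER_EM = 2048   # a typical TrueType design grid

def to_pixels(font_units, point_size, dpi=72):
    """Standard conversion: units * size * dpi / (72 * unitsPerEm)."""
    return font_units * point_size * dpi / (72 * UNITS_PER_EM)

stem = 220   # hypothetical vertical stem width, in font units
for size in (9, 12, 72):
    exact = to_pixels(stem, size)
    # At 72pt the rounding error is invisible; at 9pt it swallows or doubles
    # a large fraction of the stem -- the rework hinting exists to manage.
    print(f"{size}pt: {exact:.2f} -> {round(exact)} px")
```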

Fonts were sold as files, their screen, printer, and display elements packaged on disks and shipped. Font files were large; they took up lots of space on the hard drive. And changing fonts too many times in a single document could clog the output pipeline as a poor printer chugged away like a dancer required to change from tango to waltz to square-dance moves at every step. On-screen display posed its own challenges. Counterintuitive though it may seem, in grayscale displays, blurring the font’s appearance through a process called anti-aliasing took advantage of the eye’s tendency to fuse gray values into an apparently sharper form.5 Up close, anti-aliasing looks fuzzy, out of focus, and ill defined, but from a reading distance, the letters look finer than in the high-contrast resolution of most screens. Likewise, the RGB lines of the monitor tended to be brutal to the nuances of typefaces. Subpixel rendering, another technique for improving the way screens display fonts, takes advantage of the ways the red, green, and blue lines of RGB displays can create their own refracted image so that it recombines into a sharp illusion, using the physical properties of the display to best advantage. These are technical problems, however, challenges for engineers and designers, not philosophers.
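
A toy version of the anti-aliasing trick may help. In the sketch below (the "glyph" is a stand-in shape, not a real rasterizer), each pixel's gray value is the fraction of its area covered by the letterform, estimated by subsampling; it is exactly these fractional edges that the eye fuses into a sharper-seeming letter:

```python
# Grayscale anti-aliasing by coverage: supersample each pixel and report
# what fraction of its area falls inside the shape.

def inside(x, y):
    """A stand-in letterform: a diagonal stroke, like an italic slash."""
    return abs(x - y) < 1.5

def coverage(px, py, n=4):
    """Average an n-by-n grid of subsamples within the pixel square."""
    hits = sum(
        inside(px + (i + 0.5) / n, py + (j + 0.5) / n)
        for i in range(n)
        for j in range(n)
    )
    return hits / n**2   # 0.0 = paper, 1.0 = full ink

row = [round(coverage(x, 3), 2) for x in range(8)]
print(row)   # fractional grays at the stroke's edges, not hard 0/1 steps
```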

Or are they? Font designers speak of “character drift” when they talk about hinting, noting that the specific features that give a particular letterform its identity are at risk, losing their defining boundaries in the process. How long before the well-defined lowercase r looks like an i whose point has slipped? What is a letter in these conditions? Where is its identity held? And how is that identity specified? A basic font contains multiple tables that include both structured and unstructured data, the specific forms and shapes of glyphs, and instructions on how to talk to and work with a word processor or an application. The data about a font are merely stored until needed, held in virtual memory awaiting a command.
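
Those tables can be inspected directly. A brief sketch using the third-party fontTools library (a real, widely used Python package; the file path here is a placeholder) shows how thoroughly a single letter is dispersed across a font file's internal structures:

```python
# Peek inside a font file's tables with fontTools (pip install fonttools).
# The path "SomeFace.ttf" is hypothetical.
from fontTools.ttLib import TTFont

font = TTFont("SomeFace.ttf")
print(sorted(font.keys()))    # table tags: 'cmap', 'glyf', 'head', 'hmtx', ...

# 'cmap' maps character codes to glyph names; the outline, the metrics, and
# the hinting instructions each live in other tables. The "letter" is the
# coordination of all of them, not any single entry.
cmap = font["cmap"].getBestCmap()
print(cmap.get(ord("a")))          # the glyph name standing behind U+0061
print(font["head"].unitsPerEm)     # the design grid the outlines inhabit
```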

Web-safe fonts are linked to the capacities of browsers to access font information stored in the operating system. Font files had to be loaded, rasterized, autohinted, and configured by the browser, translating a web page’s information using locally stored fonts and display capacity. If an unfamiliar font was referenced in an HTML file, a browser would use a default font in the desktop operating system. At first, HTML had no capacity to embed fonts. Fonts were controlled by the browser. Each browser had its own limitations and specifications. Designers were appalled. HTML, designed for “display,” had stripped out one of the fundamental elements of all graphic design—font design. The crudeness of HTML’s system—simply regulating relative sizes of the handful of fonts available for web pages—was extremely reductive but very powerful. By ignoring the information required for font description, HTML could concentrate on other aspects of display in the early days of low bandwidth and still primitive browser capability. For designers, this was the equivalent of forcing a world-class concert pianist to perform Rachmaninoff on a child’s toy piano of a dozen dull keys. By 1995, Netscape introduced a tag that allowed a designer to specify that a particular font was to be used in a web display, but this required that the viewer have that exact font installed. Again, a series of default decisions could revert to the system fonts if that font were not present.
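
The fallback behavior just described reduces to a short loop. A toy model in Python (the installed-font set and function names are invented for illustration) captures why designers were appalled: the substitution happened silently, on the viewer's machine:

```python
# A toy model of early-web font fallback: use the first requested face the
# viewer's machine has installed, else fall back to the system default.

INSTALLED = {"Times New Roman", "Arial", "Courier New"}   # hypothetical machine
DEFAULT = "Times New Roman"

def resolve(requested):
    """Return the first requested face present locally, else the default."""
    for face in requested:
        if face in INSTALLED:
            return face
    return DEFAULT

# The designer specified Gill Sans; this viewer silently gets Arial.
print(resolve(["Gill Sans", "Helvetica", "Arial"]))
```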

Early cascading style sheets provided the means of specifying qualities or characteristics of fonts to be displayed. They allowed for five parameters to be indicated: family (sans serif, serif, monospace, cursive, fantasy), style, variant, weight, and size. Again, an analogy is in order: not every vanilla ice cream is the same as any other. The nuances of design were rendered moot through the need to invoke generic, rather than specific, fonts. The distinctions among different renderings of various fonts, crucial to their history and development, could not register in such a crude system of classification or description. Designers, already annoyed at the nonspecific crudeness of HTML code, had more to complain about as they watched their Gill Sans turn into Arial, or their Hoefler Text and Mrs. Eaves rendered as Times New Roman.

The most recent attempt to resolve these issues, the Web Open Font Format (WOFF), shifts font storage responsibility to a server from which the information can be called, like an actor on permanent alert for casting and appearances onstage. The notion of WOFF depends on a continuous connection to the web so that any instance of that font will be rendered from files provided on call. The complexity of the files—suited to a variety of browsers, screens, displays, and output devices, including printers—comes from the need to coordinate the consistent identity of the font across each stage of its design, storage, transfer, processing, restorage in memory, display, output, and so forth. The notion of distributed materiality, developed by Jean-François Blanchette (2011), provides a useful description of these interdependent relations of memory, processing, storage, networks, and other features of the computational apparatus brought into play when a letter’s existence is distributed and contingent.
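
Even the WOFF container itself can be read as evidence of this distributed existence. A sketch (the file path is hypothetical; the field layout follows the W3C WOFF 1.0 specification) that peeks at the wrapper around the letterforms:

```python
# Read the opening fields of a WOFF file's header (big-endian, per the
# W3C WOFF 1.0 spec). "SomeFace.woff" is a placeholder path.
import struct

with open("SomeFace.woff", "rb") as f:
    header = f.read(14)

signature, flavor, length, num_tables = struct.unpack(">4sIIH", header)
print(signature)     # b'wOFF' -- the wrapper's magic number
print(hex(flavor))   # the flavor of the sfnt font packed inside
print(num_tables)    # how many tables travel with the letterforms
```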

A letter? Imagine the lines of code, contingencies, forks, and options built into each glyph’s back story and front end. The skill of the punch-cutter, so inspiring for its manual precision and optically exquisite execution, is now matched by an equally remarkable but almost invisible (code can be read, after all, but is rarely displayed in public view) set of instructions that also encodes ownership, proprietary information, the track and trace of foundry source, and, ultimately, perhaps, a set of signals to trigger micropayments for use.6 The model of the letter is now an economic and distribution model as well as a graphical form with identifying characteristics. The letter performs across its roles and activities, with responsibility at every stage to conform to standards and protocols it encounters in every area of the web, to live on a server without becoming corrupted or decayed, to enact its formal execution on command no matter who or what has called it into play. These are megafiles with meta-elements, wrapped with instructions for behaviors as well as form. For font storage in our times reinforces the performative dimensions of identity, of a file that is a host of latencies awaiting instantiation (Drucker 2009). The condition exposes the relation between models and their embodiment, even as it belies the possibility of a disembodied letter.

If a letter is an entity that can be specified to a high degree and yet remain implementation independent, what does that mean (Glaves 2006)? The term implementation independent recalls another moment, some years ago, when a young scholar of modern poetry interested in typographic specificity in editions of various works asked me, “Does a letter have a body? Need a body?” I had never pondered the question in quite that way. Where to look for an answer to this metaphysics of form? In the essay where he speculated on the origins of geometry, Edmund Husserl provided a highly nuanced discussion of the relation between the “first geometer” and the ideal triangles and other geometric forms whose existence he brings us to consider as independent of human apperception and yet fundamentally created within the constitutive systems of representational thought (Husserl 1989). But letters are not like triangles. Their proportions and harmonies are neither transcendent nor perfect in a way that can be expressed in abstract formulae. Between the opening gambit of the question of embodiment and the recognition that letters stand apart from mathematically prescribable forms conforming to universal equations that hold true in all instances lies the path from A to screen. For if all circles may be created through the same rules with respect to center point, radius, and circumference, letters cannot be so prescribed. In fact, the very notion of pre-scription, of a writing in advance, has philosophical resonance.

Approaching this as a practical problem brought metaphysics into play through the work of Donald Knuth in the late 1970s (Knuth 1986). The mathematician and computer-programming pioneer had sought the algorithmic identity of letters. And in the course of failing to find it, he provided revealing insight into the complexity of their graphical forms. Frustrated by the publication costs involved in the production of his books, Knuth wanted to use computer-generated type to set his mathematical texts. By 1982, he had come up with the idea of a meta-font to take advantage of the capabilities of digital media to generate visual letterforms and equations. At the core of the typesetting and layout programs he designed, TeX and Metafont, was a proposition that a single set of algorithms could describe basic alphabetic letters. As Douglas Hofstadter (1985, 266) later commented, Knuth held out a “tantalizing prospect . . . : that with the arrival of computers, we can now approach the vision of a unification of all typefaces” (see Figure 4.2).



FIGURE 4.2. Donald Knuth, The METAFONTbook. 1986.



In essence, the “meta” aspect of these fonts was contained in a vision of a flexible design capable of creating “a 6 1/7-point font that is one fourth of the way between Baskerville and Helvetica” (Hofstadter 1985, 266). Knuth’s assumption was that all alphabetic letters—regardless of their font—could be described as a fixed, delimited, finite set of essential forms. For an a to be described according to its features such that the “Helvetica-ness” of one version could be mixed with “Baskerville-ness” in another would require that these stylistic characteristics be specifiable (distinct and discrete) and modifiable (available to an algorithmic specification and alteration). Hofstadter argued that letters belong to what are called productive sets, sets that cannot ever be complete because they are by definition composed of each and every instantiation (chairs are another example of a productive set). Not only do an infinite number of a’s exist, but determining precisely what their common features are—or the properties of consistency, in mathematical terms—turned out to be more complicated than Knuth had imagined.
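
What Knuth and the multiple-master experiments imagined is, at bottom, interpolation. The sketch below (the two "outlines" are invented point lists, not actual Baskerville or Helvetica data) blends compatible outlines point by point; the catch, as Hofstadter saw, is that real faces almost never share structure so neatly:

```python
# Blend two structurally compatible outlines: t=0.25 yields a form
# "one fourth of the way" from A to B. The point lists are invented.

def blend(outline_a, outline_b, t):
    """Linear interpolation of corresponding outline points."""
    assert len(outline_a) == len(outline_b), "outlines must be compatible"
    return [
        ((1 - t) * ax + t * bx, (1 - t) * ay + t * by)
        for (ax, ay), (bx, by) in zip(outline_a, outline_b)
    ]

face_a = [(0, 0), (30, 100), (60, 0), (45, 35), (15, 35)]   # stand-in "A"
face_b = [(0, 0), (32, 104), (64, 0), (50, 38), (14, 38)]   # stand-in "A"
print(blend(face_a, face_b, t=0.25))

# A swash majuscule has points and strokes its sans serif cousin simply
# lacks; no correspondence exists, and the blend has nothing to interpolate.
```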

Knuth was having these realizations in 1982, as desktop publishing and design were just coming into view over the horizon. The idea of generating fonts from a set of simple instructions, parameters describing each letter, seemed just the right fantasy to entertain because it aligned with the optimism about the capacities of computational and digital media. Conceiving the essence of a letter as an algorithm and proposing a meta-font are ideas with a direct connection to digital media’s perceived capabilities. As it turned out, no single essential a exists. A swash letter majuscule A in a wildly excessive script face will have elements that could never be predicted from an algorithm responsible for the minimal stroke forms of a three-stroke sans serif A.

Knuth’s concept of letters had everything to do with the technology in which he was imagining their production. The idea of treating letters as pure mathematical information had suggested that they were reducible to algorithmic conditions outside of instantiation. He had believed that the letters did not need a body. Clearly the technology of production and the frameworks of conception both contribute to our sense of what a letter is. The technology of digital media couldn’t produce a genuine meta-font, but in the context of computational typesetting and design, such an idea seemed feasible.

This example undercuts any technodeterminist view of letter design or conception. The limit of what a letter can be is always a product of the exchange between material and ideational possibilities. Sometimes technology leads, sometimes not. If we consider the history of letterforms, we can see many moments when one (technological development or conceptual leap) gets out of synch with the other. The serifs of Giambattista Bodoni’s eighteenth-century rational modern designs, meant to embody mathematical proportion, reason, and stability in metal type, were prone to break (Updike [1922] 1962; Drucker and McVarish 2009). These forms were better suited to production through photocomposition or digital modes in which their visual fragility could be countered by a technological robustness, but like the sparkling types of Baskerville, they suited an era in which refinement of taste was linked to virtuosic skills that flaunted the ability of punch-cutters to compete with the flourishes of master penmen. Style concepts were supported by technological innovation, not the reverse, so the printer from Birmingham had his paper “calendered” to make its surfaces smooth from the pressure of heated plates so it would support the finest possible print impressions.

Business models play their own part in the creation of concepts on which letters were and are imagined, and the most recently invented standard for letterforms, to support the storage of fonts in the cloud, is as much an opportunistic effect of markets as it is an effect of design drives or desires, because liberating fonts from their licensing agreements and foundries could be one of the unintended (or perhaps deliberate) consequences. Neoliberal notions of property, the effacement of labor and authorship, and the redistribution of economic benefit are bound up in the new concept of the Web Open Font Format (WOFF).

Hyperrationalization of letterforms is not exclusive to the realm of digital technology. Approaches to the drawing of letters in elaborate systems have several notable precedents. Renaissance designers Giambattista Palatino, Giovanni Tagliente, and Fra Luca de Pacioli, imagining a debt to the classical forms of Roman majuscules, strove not merely to imitate the perfection of shape and proportion they could see in the giant inscriptions of the Forum but to conjure a system that could embody principles for their design (Rogers 1979; Morison 1933) (see Figure 4.3). The letters on Trajan’s column, erected in A.D. 113, provided (and provide, for these aspirations continue well into the twenty-first century) the stand-out example of Roman letter design. The letterforms in the inscription seemed to express order and beauty in balance, a kind of majesty that was at once imperial in its strength and classical in its humanity. The myth of compass and straight-edge as the tools for their production persisted until the twentieth century, when Father Edward Catich’s close observation showed that the quirks of form are too subtle to be traced to mechanistic instruments (Clough 2011).7 The Romans drew their letters onto stone and then included the subtle alterations of shape that were the result of gestures in their carving. According to the Encyclopaedia Romana, Catich



FIGURE 4.3. Fra Luca Pacioli, De Divina Proportione. 1509.


hypothesized that the forms first were sketched using a flat square-tipped brush, using only three or four quick strokes to form each letter, the characteristic variations in line thickness formed by the changing cant of the brush. The letters then were cut in the stone by the same person (and not, Catich contended, separately by scribe and stone mason), the illusion of form being created by shadow.8



Concepts of the alphabet do not walk lockstep with the history of technological change. Early carvers, scraping the letters as best they could into the surface of rock, made shapes based on models they knew—how? Was seeing enough to imprint that memory on the Neolithic imagination? Or did these scribes carry a cheat sheet of some kind, a rock or bit of clay or papyrus with the signs carved in them? Who taught these scribes? We know of the schools at Nineveh and throughout the ancient Near East (Robson 2009); of the training of Egyptian scribes and Babylonian ones; of the measured lines of Hammurabi’s stele from about 1780 B.C., its regular and elegant cuneiform speaking volumes about the habits of writing and the expertise of its inscriber and anticipated readers. But literate culture was fully developed, part of administrative systems of power and purpose in those regions. The Sinai inscriptions are in a remote region. Who knew the alphabet—this new set of glyphs—well enough to hold it in mind and pass it on as a fixed sequence of letters with stable names and graphic identities? The letter models that appear on Greek vases—the Dipylon vase is a celebrated example—would have been painted from familiar models, as a set of strokes that comprised fixed patterns. The increasingly abstract forms that became Greek writing have no easy mnemonic to guide their creation, as the aleph-oxhead resemblance did for earlier scrapers in stone. Atomists are convinced of the power of letters, whether in a mystical or instrumental sense. Empirical archaeologists are equally determined to piece together a single binding narrative from physical remains, genealogical seekers after origins and mutations, as if influence formed a single organic tree spreading like some alluvial fan across the world from a single source. Mystics believe symbolic tales adhere to the signs (or inhere in them), a mystery to unravel or contemplate. In the technological inventory of conceptions, another set of frameworks prevails.

But when Renaissance artists looked to their antique past, they made every effort to create perfect forms. Their worldview, figured in ideal shapes as an expression of cosmological belief in the perfection of divine proportions and design, depended on regular circles and squares. The aforementioned Pacioli’s De Divina Proportione (1509) bears some of the same conspicuous graphic features as the diagrams of Tycho Brahe in his explanations of the apparent motion of the planets. As expressions of a divine design, these must comprise perfect forms, and the circles within circles elaborately constructed by Tycho to describe elliptical motions are as mistaken in their mathematical principles as are Pacioli’s attempts to redraw Roman letters by the same means (Koestler 1959). Other extreme approaches to rationalized design include the elaborate calculations produced at the very end of the seventeenth century on the order of the French King Louis XIV. The resulting Romain du Roi, on its grid of perfect squares sanctioned by a large committee of experts, proved equally sterile, static to the eye. In the hands of the punch-cutter Philippe Grandjean, the designs of the monarch’s font, held by exclusive license for his use, were modified to subtle effect in the production process. Matter triumphed over idea, and the hand and body over intellection, as the experienced type designer recast the ideal forms into realized designs that passed as perfect.

Just as reason and proportion could be pressed into design service by the rationalizers of form, other approaches to production had their own systematic foundation. The discipline of the hand, conformance to a stroke pattern, ability to execute a series of marks in fixed, controlled sequence that become letters on a page—these were the skills of the writing master. Ink from a pen, paint from a brush, graphite from a pencil are all technologies that require training to be perfected. The virtuoso capacities of the writing masters of the seventeenth and eighteenth centuries will rarely be equaled or surpassed in our lifetime (Bickham [1743] 1941; Hoefnagel 2010). Those well-orchestrated moves, and the training necessary to achieve them, are part of a cultural pattern long past. But in the process of creating their own materials for training, the teachers of handwriting conceived simple charts of the basic strokes from which any and all letters might be derived. A curious combination of atomistic elements and somatic gestures, this approach is ductal, or gestural, in its foundation. The letters are more like traces of dance steps than ideal forms, their beauty achieved by schooling the fingers, wrist, and arm to act together in a well-regulated performance (see Figure 4.4).



FIGURE 4.4. George Bickham, The Drawing and Writing Tutor. Or an alluring introduction to the study of those sister arts. 1740.



In fact, every technology suggests possibilities for letterform designs: clay and stylus, brush and ink, drawing pens and vellum, metal type, steel engravings, paper and pencil, ballpoint, photography and photomechanical devices, digital type. But letter design isn’t simply determined by technology. Gutenberg’s metal characters took their design from preexisting handwritten models just as surely as photocomposition copied the design of hot metal fonts—despite the unsuitability of these models to the new media. Finding a vocabulary for a new technology takes time. The aesthetics of a medium aren’t in any way self-evident—or immediately apparent. How was the metal-ness of metal to be made use of with respect to the design of letterforms (Updike [1922] 1962)? The inherent aesthetics of phototype and its ability to carry decorative and pictorial images, drawn and fanciful forms; to be set so close the letters overlap; or to be produced with a bent and curved film strip through which shining light made an anamorphic or distorted form—all this had to be discovered. Similarly, in the simulacral technologies of digital media, where shape-shifting and morphing are the common currency of image exchange, what defines the technological basis of an aesthetic—the capacity for endless invention and mimicry or the ability to create a randomization in the processing that always produces a new alteration in each instantiation?

But conceptual shifts across historical moments and geographies arise from other impulses than technological change, and this is crucial. The conception of forms can be realigned radically when explanatory narratives change. The eighteenth-century scholars L. D. Nelme and Rowland Jones, for instance, struggling with Irish national identity and its roots in the prehistory of the Celts, took apart the alphabet and demonstrated its foundations in myths of origin that supported their claims to cultural autonomy. The letters were a code on which policies and politics might be played out in a contemporary arena. Published in London in 1772, Nelme’s An Essay towards an Investigation of the Origin and Elements of Language and Letters employed an old theory, that of the two essential forms of the I and O as the radicals from which the ten signs of the antediluvian language used by the Japhetans could be derived. The original Europeans, according to Nelme, and their language were not confused at Babel. His reading of the O and L as O-L, the ALL, was put to the purpose of showing that the Celts were the chosen people. Nelme was repeating the theme of Jones’s 1764 Origin of Language, which traced the alphabet to the ancient tongue of the Cumbri-Gallic Celts, a point of view taken up again by Charles Vallencey in 1802 in his publication Prospectus of a Dictionary of the Language of Aire Coti, or, Ancient Irish, compared with the Language of the Cuti, or Ancient Persians, Hindoostanee, the Arabic, and Chaldean Languages. The point is simply that what drove the analysis of the forms of letters was a powerful belief system rooted in exigencies that were sufficiently compelling in their moment that they explained the alphabet as a vivid demonstration of principles. They cannot be characterized as wrong any more than Knuth’s quest for a meta-font can. Each is an accurate expression of belief. No era has an exclusive claim on blindness to its assumptions.

A letter, like any other cultural artifact, is designed according to the parameters on which it can be conceived. If we imagine, for instance, that a letterform may be shaped to contain an image with a moral tale, or that the letters of the alphabet comprise cosmological elements, or national histories, then designs that realize these principles may be forthcoming no matter what the material. Similarly, a twentieth-century modern sensibility inclined to seek forms within the aesthetic potential of materials of production was satisfied by designs that brought forth the qualities of smoothly machined curves such as those typical of Deco forms. Obviously, in any historical moment, competing sensibilities and conceptualizations exist simultaneously. The modernity of Paul Rand exists side by side with the pop sensibility of Milton Glaser and the 1960s psychedelic baroque style of poster designer Victor Moscoso (Heller and Chwast 2000). Can anything be said to unify their sensibilities? At the conceptual level, yes—a conviction that custom design and innovation are affordances permitted, even encouraged, in an era of explosive consumer culture. Thus, for our moment, the need to use the discrete properties of the alphanumeric code allows us to press its arbitrary signs into the service of a World Wide Web whose Western roots connect it to antiquity.

The design of optical character recognition (OCR) systems divides along conceptually familiar lines. Print materials are analyzed by imagining their characters are pictures, distinguished one from another by graphical features. By contrast, handwriting samples are studied by capturing the sequence of events in their production. OCR was already dreamed of by the visionary entrepreneurial inventors of the mid- to late nineteenth century, who hoped they might create a machine capable of reading to the blind. Not until the mid-twentieth century was the first actual “reading machine” produced. The technological capacity for reliable machine reading progressed according to various constraints. Initially fonts were designed to suit the capacities of machine reading, with a feature set that occupied distinct areas of a grid. This allowed a unique, unambiguous identity to be ascribed to each letter through a scanning process, without the need for sophisticated feature extraction or analysis (the font was essentially a code, though it could be read by humans as well as machines). Technological capacities have become more sophisticated and no longer require use of a specially designed font, but some of the processes of pattern recognition in these earliest systems are still present. These basic techniques rely on thresholding (sorting characters from background), segmentation (into discrete symbols), zoning (dividing an imaginary rectangle enclosing the letter so that points, crossings, line segments, intersections, and other elements can be analyzed optically), and feature extraction and matching (against templates or feature sets). Problems occur when letters are fragmented, much visual noise is present, or letters overlap or touch other elements in the text or image. The feature extraction can be complemented with natural language processing, gauging the probability of the presence of a word according to statistical norms of use, frequency, or rules of syntax. In handwritten specimens, the basic act of segmentation poses a daunting challenge, and therefore the automated reading of handwriting is most effective when working with samples that store the production history of strokes or the act of a letter’s coming into being. Thus the two approaches to OCR align with earlier conceptions of the letter—as either an image with distinctive features or as the result of a series of motions resulting in strokes.
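
The classical pipeline named above can be compressed into a few lines. A minimal sketch (the "page" is a tiny invented grayscale image; real systems add zoning, feature matching, and language models on top):

```python
# Two of the classical OCR steps: thresholding (ink vs. paper) and
# segmentation (splitting at all-background columns). The page is invented.

PAGE = [
    [250, 40, 250, 250, 30, 250],
    [250, 35, 250, 250, 25, 250],
    [250, 45, 250, 250, 35, 250],
]

def threshold(img, cut=128):
    """Binarize: 1 where the pixel is darker than the cutoff (ink)."""
    return [[1 if v < cut else 0 for v in row] for row in img]

def segment(binary):
    """Return (start, end) column ranges separated by blank columns."""
    runs, current = [], []
    for i, col in enumerate(zip(*binary)):
        if any(col):
            current.append(i)
        elif current:
            runs.append((current[0], current[-1]))
            current = []
    if current:
        runs.append((current[0], current[-1]))
    return runs

print(segment(threshold(PAGE)))   # -> [(1, 1), (4, 4)]: two glyph candidates
# Touching or fragmented letters defeat exactly this step, as the text notes.
```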

The same two variables are at play, then, no matter what the individual style or cultural trend: the materials of production and the conception of a letter. Even at the level of production, where identity seems self-evident, a host of different concepts about what a letter is can be identified. In overview, we can say that a letterform may be understood graphically as a preexisting shape or model, a ductal form created by a sequence of strokes with varying pressures, an arbitrary sign, an image fraught and resonant with history and reference, an arrangement of vectors or pixels on a screen, or a digital file capable of being manipulated as an image or algorithm. Conversely, in conceptual terms, we understand a letter as a function of its origins (born in flames, traced on wet sand, marked by the hand of God on tablets of stone, reduced from images, invented out of air, scratched in rocks), its value (numerical, graphical, pictorial), its form (style, history, design), its functionality (phonetic accuracy, legibility), its authenticity (accuracy), or its behaviors (how it works). Other conceptualizations might be added to this list.

Therefore asking “what is a letter?” is not the same as asking “how is it made?” The answers are never in synch—the questions don’t arise from a common ground—but they push against each other in a dynamic dualism. The road from invention to the present existence of letters is a dialogical road, not a material one. The milestones are not a set of dates with tags: creation of the quill pen, invention of printing, or production of the punch-cutting machine. Important as these technological developments are to the transformation of designs in physical, material form, they are modified in relation to the conceptual schema by which the letters may be conjured as forms. The letters did not migrate from A to screen. Atomistic in their operation, contingently distributed in their constitution, and embodied in their appearance, they have had to be reconstituted, remade in their new environment, still mutable after four thousand years of wandering.


Notes

1 I am indebted to Jason Taksony Hewitt for conversations that have informed my understanding of these issues.

2 Basic books on the alphabet from an archaeological point of view include Sampson (1985), Driver (1976), Healy (1990), McCarter (1975), Gardiner and Peet (1952–55), and Diringer (1943).

3 For an explanation, see http://computer.howstuffworks.com/question460.htm. See also http://www.truetype-typography.com/articles/ttvst1.htm.

4 http://en.wikipedia.org/wiki/Typography_of_Apple_Inc.

5 http://en.wikipedia.org/wiki/Font_rasterization

6 http://www.fastcodesign.com/1671073/algorithmic-typography-crafted-entirely-with-computer-code#1.

7 See also http://penelope.uchicago.edu/048grout/encyclopaedia_romana/imperialfora/trajan/column.html.

8 http://penelope.uchicago.edu/050grout/encyclopaedia_romana/imperialfora/trajan/column.html.

References

Albright, William Foxwell. 1966. The Proto-Sinaitic Inscriptions and Their Decipherment. Cambridge, Mass.: Harvard University Press.

Bernal, Martin. 1990. The Cadmean Letters. Winona Lake, Ind.: Eisenbrauns.

Bickham, George. (1743) 1941. The Universal Penman. New York: Dover Reprint.

Blanchette, Jean-François. 2011. “A Material History of Bits.” Journal of the American Society for Information Science and Technology 62, no. 6: 1042–57.

Catich, Edward M. 1968. The Origin of the Serif: Brush Writing and Roman Letters. Davenport, Iowa: Catfish Press.

Clough, James. 2011. “James Clough’s Roman Letter.” http://soulellis.tumblr.com/post/683637102/james-cloughs-roman-letter.

Colapietro, Vincent Michael, and Thomas M. Olshewsky. 1996. Peirce’s Doctrine of Signs. New York: Mouton de Gruyter.

Cottrell, Leonard. 1971. Reading the Past. New York: Crowell Collier Press.

Diringer, David. 1943. “The Palestinian Inscriptions and the Origin of the Alphabet.” Journal of the American Oriental Society 63, no. 1: 24–30. http://www.jstor.org/pss/594149.

Driver, Godfrey. 1976. Semitic Writing from Pictograph to Alphabet. London: Oxford University Press.

Drucker, Johanna. 1994. The Alphabetic Labyrinth: The Letters in History and Imagination. New York: Thames and Hudson.

———. 2009. “From Entity to Event: From Literal Mechanistic Materiality to Probabilistic Materiality.” Parallax. http://indexofpotential.net/uploads/1123/drucker.pdf.

Drucker, Johanna, and Emily McVarish. 2009. Graphic Design History: A Critical Guide. Upper Saddle River, N.J.: Pearson Prentice-Hall.

Frost, Samantha, and Diana Coole. 2010. New Materialisms. Durham, N.C.: Duke University Press.

Gardiner, Alan, and T. Eric Peet. 1952–55. The Inscriptions of Sinai. London: Egypt Exploration Society.

Gardner, Martin. 1982. Logic Machines and Diagrams. Chicago: University of Chicago Press.

Glaves, Mason. 2006. “The Impending ‘Implementation Independent’ Interface.” http://weblogs.java.net/blog/mason/archive/2006/06/the_implending.html.

Havelock, Eric. 1976. Origins of Western Literacy. Toronto: Ontario Institute for Studies in Education.

Healy, John. 1990. The Early Alphabet. Berkeley: University of California Press.

Heller, Steven, and Seymour Chwast. 2000. Graphic Style. New York: Harry N. Abrams.

Hoefnagel, Joris. 2010. The Art of the Pen. Los Angeles, Calif.: Getty.

Hofstadter, Douglas. 1985. “Metafont, Metamathematics, and Metaphysics.” In Metamagical Themas, 266. New York: Viking.

Husserl, Edmund. 1989. The Origin of Geometry. Lincoln: University of Nebraska Press.

Kinross, Robin. 1992. Modern Typography. London: Hyphen Press.

Knuth, Donald. 1986. METAFONT: The Program. Reading, Mass.: Addison-Wesley.

Koestler, Arthur. 1959. The Sleepwalkers. New York: Grosset and Dunlap.

Logan, Robert. 1986. The Alphabet Effect. New York: William Morrow.

McCarter, P. Kyle, Jr. 1975. The Antiquity of the Greek Alphabet and the Early Phoenician Script. Cambridge, Mass.: Scholars Press for Harvard Semitic Museum.

Morison, Stanley. 1933. Fra Luca de Pacioli. New York: Grolier Club.

Naveh, Joseph. 1982. Early History of the Alphabet. Jerusalem: Magnes Press.

Petrie, W. M. Flinders. 1912. The Formation of the Alphabet. London: Macmillan.

Phinney, Thomas. 2011. “TrueType and PostScript Type 1: What’s the Difference?” http://www.truetype-typography.com/articles/ttvst1.htm.

Ploudre, Jonathan. 2011. “Macintosh System Fonts.” http://lowendmac.com/backnforth/2k0601.html.

Re, Margaret. 2003. Typographically Speaking: The Art of Matthew Carter. New York: Princeton Architectural Press.

Robson, Eleanor. 2009. “The Clay Tablet Book in Sumer, Assyria, and Babylonia.” In A Companion to the History of the Book, edited by Simon Eliot and Jonathan Rose, 67–94. Malden, Mass.: Wiley-Blackwell.

Rogers, Bruce. 1979. Paragraphs on Printing. New York: Dover Reprint.

Sampson, Geoffrey. 1985. Writing Systems. Stanford, Calif.: Stanford University Press.

Sanders, Seth. 2009. The Invention of Hebrew. Urbana: University of Illinois Press.

Senner, Wayne M. 1989. The Origins of Writing. Lincoln: University of Nebraska Press.

Taborsky, Edwina. 1998. Architectonics of Semiosis. New York: Palgrave Macmillan.

Tory, Geoffroy. 1529. Champfleury. Paris.

Updike, Daniel Berkeley. (1922) 1962. Printing Types: Their History and Use. Reprint, Cambridge, Mass.: Belknap Press.

Vallencey, Charles. 1802. Prospectus of a Dictionary of the Language of Aire Coti, or, Ancient Irish, compared with the Language of the Cuti, or Ancient Persians, Hindoostanee, the Arabic, and Chaldean Languages. Dublin: Printed by Graisberry and Campbell.
