3.6 The Influence of New Technology
Learning Objectives
- Determine the benefits and drawbacks of digital libraries.
- Define print-on-demand and self-publishing.
The book industry has changed enormously since its inception. From the invention of the papyrus scroll to the introduction of the e-book, new technologies have continuously affected how people view and experience literature. With the advent of digital media, old media industries, such as the book industry, must find ways to adapt. Some fear that new technology will destroy the industry, while others maintain that it works to the industry’s advantage. On one point, however, few disagree: digital technology promises to reshape the publishing industry as we know it.
E-Books
The first e-book readers were personal digital assistants (PDAs), pocket-sized devices that could store and display large amounts of text and that peaked in popularity during the 1990s. However, early e-book readers lingered on the market, popular in certain techy niches but unable to gain traction with the wider population. Early e-readers had minimal battery life, and readers found the text difficult to read. Through the 2000s, technological advances allowed for smaller and sleeker models. The Apple iPhone and the iPad helped make readers more comfortable with reading on a small screen. The second half of the decade saw the release of many dedicated e-readers. The technology got a boost when Oprah Winfrey praised the Kindle on her show in October 2008. By that holiday season, e-book reader sales had skyrocketed among the general public; it wasn’t just technologically savvy individuals who were interested anymore. Despite criticism that it provided an inferior reading experience to dedicated e-readers, the Apple iPad became a powerful driving force behind e-book sales—people downloaded more than 1.5 million books on the iPad during its first month of release in 2010 (Maneker, 2010).
E-books now account for a substantial portion of the book market; by some estimates, they make up around 30 to 40 percent of overall sales. Amazon, the leading e-book retailer, offers a vast catalog of titles, with millions of books available for Kindle devices and other compatible platforms. Some devices offer wireless accessibility, meaning that an e-reader doesn’t have to connect to a computer to access titles; it only requires an open Wi-Fi connection. With access to a dazzling array of books available with just a few clicks, contemporary consumers seem enamored with the e-book. An e-book reader has the space to store thousands of titles in an object smaller and lighter than the average hardcover novel. And though the devices themselves involve a steep initial investment, e-books provide cheaper reading material than their hardcopy equivalents; sometimes they come at no cost at all. Thanks to efforts like Project Gutenberg and Google Books (see the following section), more than a million public domain titles are available as free e-books.
Anything that gets people excited about books and reading should benefit the publishing industry, right? Unfortunately for U.S. publishers, it does not work that way. Some publishers worry that e-book sales may end up hurting their bottom lines. During the Kindle’s first year, Amazon essentially set the standard price for bestselling or new-release e-books at $9.99. Since Amazon acted as a wholesaler and bought these books for half the publisher’s list price—generally around $25 for a new hardcover—the company sold these titles at a loss. However, for Amazon, a short-term loss might have had long-term payoffs. In the years since, Amazon has maintained a dominant position in the e-book market, accounting for the majority of U.S. e-book sales by most estimates. This dominance has led to concerns among traditional publishers about the potential impact on hardcover and trade paperback sales. Because e-books often sell at lower prices, consumers may become less willing to pay higher prices for physical copies.
In January 2010, the conflict between Amazon and the publishing establishment came to a head. Macmillan, one of the six major publishing companies in the United States, suggested a new business model to Amazon, one that resembled the deal the Big Six publishers had worked out with Apple for e-book sales on the Apple iPad. Under the existing arrangement, Amazon purchased books from publishers at wholesale rates—half the hardcover list price—and then set whatever retail price it wanted. This allowed Amazon to sell books at a loss in the hope of convincing more people to buy Kindles. Macmillan proposed a system in which Amazon would act more as a commission-earning agent than a wholesaler. In Macmillan’s proposed model, the publisher would set the retail price and take 70 percent of each sale, leaving 30 percent for the retailer. Macmillan couldn’t force Amazon to agree to this deal, but the publisher could strike a hard bargain: If Amazon refused Macmillan’s offer, it could still sell Macmillan titles under the wholesale model, but the publisher would delay e-book editions for seven months after hardcover releases. A standoff followed. Amazon didn’t just reject Macmillan’s proposal; it removed the “buy” button from all Macmillan books listed on its website (including print books), essentially refusing to sell Macmillan titles. After a few days, however, Amazon capitulated and agreed to Macmillan’s terms, but not before issuing a strongly worded news release claiming that it had agreed to sell Macmillan’s titles “at prices we believe are needlessly high for e-books,” because “Macmillan has a monopoly over their own titles” (Rich & Stone, 2010). Still, Macmillan and the other publishers seem to have won this battle: Amazon agreed to price e-books for most new adult fiction and nonfiction at $12.99 to $14.99, though best sellers would still sell for $9.99 (Rich & Stone, 2010). The dynamic between publishers and retailers has continued to evolve in the years since.
While publishers initially sought to maintain higher e-book prices, the competitive landscape and consumer preferences have led to more flexible pricing models. Today, e-book prices vary significantly depending on the publisher, genre, and age of a book. Some publishers still negotiate higher prices for new releases, but the overall trend has been toward more competitive pricing and a wider range of options for consumers.
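The arithmetic behind the two pricing models described above can be sketched in a few lines of code. This is an illustrative sketch only, using the prices cited in the text; the function names are ours, not industry terms.

```python
# A sketch of the wholesale and agency pricing models described above,
# using the illustrative figures from the text. All names are hypothetical.

HARDCOVER_LIST = 25.00  # typical list price for a new hardcover, per the text


def wholesale_margin(retail_price, list_price=HARDCOVER_LIST):
    """Retailer's per-copy margin under the wholesale model: the retailer
    buys at half the list price and sets any retail price it likes."""
    wholesale_cost = list_price / 2
    return retail_price - wholesale_cost


def agency_split(retail_price, publisher_share=0.70):
    """Under the agency model, the publisher sets the retail price,
    takes 70 percent, and leaves 30 percent for the retailer."""
    publisher_take = retail_price * publisher_share
    return publisher_take, retail_price - publisher_take


# Amazon's $9.99 Kindle price under the wholesale model loses money
# on a $25 hardcover:
print(round(wholesale_margin(9.99), 2))   # -2.51 per copy sold

# Macmillan's proposed agency terms at a $12.99 retail price:
publisher, retailer = agency_split(12.99)
print(round(publisher, 2), round(retailer, 2))   # 9.09 3.9
```

The sketch makes the dispute concrete: under wholesale terms, Amazon could absorb a loss on every copy to drive Kindle adoption, while agency terms guaranteed the retailer a profit on each sale but took pricing control away from it.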
But the $10 book may represent the least of the publishing industry’s worries. At the start of 2010, more than half of the bestselling titles on Kindle were available for free. Some of these titles included public domain novels such as Pride and Prejudice, but the service also gave away books by living authors for promotional purposes. The industry hasn’t yet come to a consensus about the utility of free e-books. Some publishers consider the practice one that devalues books in the eyes of customers. “At a time when we are resisting the $9.99 price of e-books,” David Young of the Hachette Book Group told The New York Times, “it is illogical to give books away for free” (Rich, 2010). Other publishers consider free e-books a promotional tool to build word-of-mouth and to introduce readers to new authors. The availability of free e-books has fluctuated over the years, but it remains common for authors and publishers to offer free or discounted titles to promote new releases, build readership, or support marketing campaigns; platforms like Amazon’s Kindle store still carry numerous free e-books, often as part of promotional offers or subscription services.
Other e-books emerge from outside the traditional publishing system. Four of the five bestselling novels in Japan in 2007 were cell phone novels, books written, and intended to be read, on cell phones. Cell phone novels are traditionally written by amateurs who post them on free websites. Readers can download copies at no cost, which means no one makes much of a profit from this genre. Although the phenomenon has not yet caught on in the United States, some publishers see the cell phone novel as a further sign of the devaluation of books in a world where browsers expect free content.
While the e-book market grew rapidly in its early years, the pace of expansion has slowed more recently. E-books now account for a substantial portion of book sales, and the market has become more saturated. Factors such as the economic climate, technological advances, and consumer preferences will continue to shape the future of the e-book industry. Some people have theorized that e-readers will lead to an increasing popularity of the short story, which readers can buy and read in short increments. Others have claimed that they’ll destroy the book industry as it currently stands. Whatever the future of books looks like, everything—from the production of books to the way people read them—continues to change rapidly because of new technologies.
Digitizing Libraries
The idea of a digitized library has gained traction since the early years of the Internet. A digital library stores its materials in a digital format, accessible by computers either locally or remotely through a computer network. Michael Hart founded Project Gutenberg, the oldest digital library, in 1971, two decades before the World Wide Web. Hart aspired to make 10,000 of the most-consulted books publicly available and free by the end of the century. The forward-thinking Hart named his project after the inventor of the movable type printing press, perhaps realizing that book digitization had the potential to revolutionize the way humans produce and read books as much as Gutenberg’s invention had centuries earlier. At first, Hart and his fellow book-digitizing volunteers found the process slow because every text had to be copied manually. In the early 1990s, scanners and text-recognition software allowed them to partly automate the process.
Fast-forward to modern times. Project Gutenberg’s free online library boasts more than 70,000 public domain works available for free download. Stanford University uses a robotic page-turning scanner to digitize 1,000 book pages an hour. Stanford’s partner in digital library production is Google Books, which has scanned over 40 million books since the project began in 2004. A Chinese company claims to have digitized more than half of all books published in Chinese since 1949. In 2006, The New York Times estimated that humans have published at least 32 million books throughout history; the huge push for book digitization makes it seem entirely possible that nearly all known books could exist in a digital format by 2060 (Kelly).
Some liken the prospect of widely accessible and easily searchable free libraries to the proliferation of brick-and-mortar libraries in the 19th century, which led to a surge in literacy rates. One of Project Gutenberg’s stated goals is “to break down the bars of ignorance and illiteracy” through its library of digitized books (Hart & Newby, 2004). Digital libraries make a huge selection of texts available to people with Internet access, giving these collections the potential to democratize knowledge. Bill McCoy, the general manager of Adobe’s e-publishing business, told The New York Times in 2006, “Some of us have thousands of books at home, can walk to wonderful big-box bookstores and well-stocked libraries and can get Amazon to deliver next day. The most dramatic effect of digital libraries will be not on us, the well-booked, but on the billions of people worldwide who are underserved by ordinary paper books” (Hart & Newby, 2004). Digitized libraries can make fragile materials available to browsers without damaging originals, and academic libraries can share important texts without shipping books across the country.
Notably, one of the largest online libraries is not run by an academic institution at all: Google Books. The bulk of free digital books available from Google Books or elsewhere come from the public domain, which constitutes approximately 15 percent of all books. Google Books has made over a million of these titles fully and freely searchable and downloadable. Other works in the Google Books digital library include in-print texts whose publishers have worked out a deal with Google. Some of these titles have their full text available online; others allow only a limited number of page previews. As part of its partnership with publishers, a Google Books search result will often provide links to the publisher’s website and to booksellers.
Google Books ran into trouble, however, when it began to digitize the millions of books with unclear legal status, such as out-of-print works that did not yet reside in the public domain. Many of these titles are considered orphan works, meaning that no one knows who owns their copyright. In 2004, the site announced plans to scan these texts and to make them searchable, but it would only show sentence-long snippets to searchers. Copyright holders could ask Google to remove these snippets at any time. Google claimed that this digitization plan would benefit authors, whose books would no longer linger in out-of-print limbo; it would also help researchers and readers, who would be able to locate (and perhaps purchase) previously unavailable works.
Publishers and authors did not agree with Google. Many objected to Google’s plan to scan first and look into copyright ownership later; others saw Google’s profiting from works still under copyright as a clear violation of intellectual property law. In 2005, the Authors Guild and the Association of American Publishers (AAP) sued Google for “massive copyright infringement.” Google argued that it had essentially created a massive online card catalog; the Authors Guild and the AAP alleged that Google had attempted to monopolize information and profit from it. In 2008, Google agreed to a $125 million settlement with the publishers and the Authors Guild. Some of that money would go directly to copyright holders; some would pay for legal fees; and some would go toward founding the Book Rights Registry, an independent nonprofit association that would ensure that content users (like Google) paid copyright owners. Copyright owners would get money from Google and from potential book sales; Google would get money from advertisers, book sales, and institutional subscriptions by libraries.
Still, not everyone agreed with the decision. A diverse partnership of organizations, including Amazon, the Internet Archive, and the National Writers Union, formed the Open Book Alliance out of concern that Google’s proprietary control of so much copyrighted material amounted to an antitrust violation. As the group stated on its website:
We will assert that any mass book digitization and publishing effort be open and competitive. The process of achieving this promise must be undertaken in the open, grounded in sound public policy and mindful of the need to promote long-term benefits for consumers rather than isolated commercial interests. The Open Book Alliance will counter Google, the Association of American Publishers and the Authors’ [sic] Guild’s scheme to monopolize the access, distribution and pricing of the largest digital database of books in the world.
Digital libraries also need a solution for a concern addressed earlier in the chapter: digital decay. One librarian at Harvard University told The New York Times that “we don’t really have any methodology [to preserve digital material] as of yet…. We just store the disks in our climate-controlled stacks, and we’re hoping for some kind of universal Harvard guidelines” (Cohen, 2010). While significant advancements have been made in digital preservation technologies, the challenge of digital decay remains a pressing concern for libraries, archives, and other institutions that manage digital collections. Factors such as technological obsolescence, data corruption, and the long-term viability of storage media continue to pose risks to the preservation of digital content.
To address these challenges, organizations have adopted various strategies, including regular data migration, the use of long-term storage solutions, and adherence to preservation standards. However, the evolving nature of digital technologies and the constant emergence of new formats make it difficult to guarantee the long-term preservation of all digital materials.
Ongoing research and development are essential to creating more effective and sustainable digital preservation methods. Collaboration among institutions and the sharing of best practices can also help address the challenges of digital decay.
Print-on-Demand and Self-Publishing
Gutenberg’s printing press revolutionized the world because it permitted the mass production of books. In medieval times, readers often commissioned a scribe to copy a text by hand, a process that could take months or even years. But despite their many conveniences, printed books carry risks for authors and publishers. Producing books in bulk means that publishers take a gamble, attempting to print enough books to satisfy demand, but not so many that unwanted copies linger unsold in warehouses. When a book doesn’t sell as well as expected, the publisher may take a loss if the costs of publishing the book exceed the revenue from its sale. Modern technology has made it feasible for some authors and publishers to turn to an updated version of the medieval model of producing books on demand for specific customers, allowing them to avoid the risk of carrying a large inventory of books that may or may not sell. Print-on-demand, a system in which publishers print a book only after receiving an order for it, and the increasing trend of self-publishing may reshape the industry in the 21st century.
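The gamble described above comes down to simple cost structures, which a short sketch can make concrete. All of the cost figures below are hypothetical, chosen only to illustrate the trade-off between a traditional print run and print-on-demand:

```python
# Hypothetical figures illustrating the trade-off the text describes:
# a traditional print run has a large fixed setup cost and a low per-copy
# cost (plus inventory risk), while print-on-demand has no setup cost but
# a higher per-copy cost.

def print_run_cost(copies_printed, setup_cost=2000.0, unit_cost=2.00):
    """Total cost of an offset print run, paid up front regardless of sales."""
    return setup_cost + unit_cost * copies_printed


def print_on_demand_cost(copies_sold, unit_cost=6.00):
    """Print-on-demand: each copy is printed only after it is ordered."""
    return unit_cost * copies_sold


# A 1,000-copy run costs the publisher the same whether it sells
# 300 copies or all 1,000:
print(print_run_cost(1000))        # 4000.0
# With print-on-demand, 300 sales cost only:
print(print_on_demand_cost(300))   # 1800.0
```

Under these illustrative numbers, the print run is cheaper per copy only if most of the run actually sells; for uncertain or niche titles, print-on-demand eliminates the risk of paying for copies no one buys.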
Self-publishing—a system in which an author, not a third-party company, takes charge of producing and publishing a work—did not originate with digital formats. Many well-known authors, including Virginia Woolf and Oscar Wilde, self-published works during their lifetimes. Popular books like The Joy of Cooking and the Chicken Soup for the Soul series had their origins in self-publishing. Many authors also self-publish when they find themselves unable to get support from the traditional publishing world. Daniel Suarez’s techno-thriller Daemon was rejected by 48 agents before he opted for self-publishing. After building interest on blogs, Suarez eventually got a two-book deal with Dutton, an imprint of Random House (McHugh, 2008). Additionally, self-publishing provides an attractive option for authors who want control over their content. Instead of leaving decisions up to the publisher, authors can control the editing, design, and marketing of their work.
Authors who choose to strike out on their own must overcome the stigma that sometimes gets attached to self-published books. Until recent years, most self-published authors went through so-called vanity presses, which charge writers a premium for published copies of their books. As the name implies, these types of self-publishing ventures often preyed on writers’ need to see their own work in print. To justify the cost of printing, a standard minimum order was a thousand copies, and unless authors found an audience for their book, they had little hope of selling them all. Because vanity presses exercised no quality control and would usually publish anything for money, some readers grew skeptical of self-published books. Major retailers and distributors generally refused to carry them, meaning that authors had to rely on self-marketing efforts to sell their books. Before the advent of the Internet, this usually meant either selling copies in person or relying on mail-order catalogs, neither of which is a very reliable way to sell enough copies to recoup costs.
Since the early 2000s, however, self-publishing has changed dramatically. Advances in publishing technology have made it easier for self-published books to more closely resemble traditionally published ones. Free professional typesetting software allows writers to format their text for the page; Adobe Photoshop and similar programs have made image editing and graphic design feasible for amateurs and professionals alike. The Internet has revolutionized marketing and distribution, allowing authors of books about niche subjects to reach a worldwide audience. As a result, many new Internet-based self-publishing companies have sprung up, offering a variety of services. Some companies, such as Lulu Enterprises and CreateSpace, feature a low-cost service without many bells and whistles; others, like IngramSpark, offer a package of services that may include professional editing, cover design, and marketing. The process has become streamlined as well. For example, to publish a book with Lulu, an author just has to upload a PDF of a properly formatted text file; decide what size, paper, and binding options to use; and make a cover using a premade template. Self-published books generally do not take long to produce and allow an author a higher share of the royalties, though they usually cost more on a per-book basis. As a result, self-published books often have a higher list price.
Whereas vanity publishers used to face stigmatization for charging authors sometimes thousands of dollars to publish their books, creating a book using the services of Lulu or CreateSpace doesn’t cost the author anything. That’s because users who upload their content do not create an actual, physical copy of a book. With print-on-demand technology, books do not get printed until a customer places an order, which significantly lowers the financial risk for self-publishers. Print-on-demand works especially well for books with a limited or niche audience. Both small presses and academic publishers use the technology to promote older books without much of an audience. With print-on-demand, books that may only sell a few dozen copies a year can stay in print without the publisher having to worry about printing a full run of copies and getting stuck with unsold inventory.
Although some self-published authors manage to find a huge audience, most don’t. Bob Young, the founder of Lulu, told the London Times that he set a goal to publish 1 million books that each sell 100 copies, rather than 100 books that sell 1 million copies each (Whitworth, 2006). Lulu and other enterprising self-publishers disrupt the traditional notion of the publishing house, which acted as a sort of gatekeeper for the book industry—ushering a few talented, lucky writers in and keeping others out. In the world of self-publishing, nothing stops a person from publishing a passion project—anyone with a book in a PDF file can offer a nice-looking paperback for sale in under an hour. This has democratized the industry, allowing writers who had received rejection notices from traditional publishers to find an audience on their own terms. But it has also increased the publication opportunities for a lot of writing with little literary merit. Additionally, if a best seller in the Lulu world only needs to sell 500 copies, as Bob Young told the London Times, then few authors will be able to make a living through self-publishing. Indeed, most self-publishing success stories involve writers whose self-published efforts sold well enough to get them a book deal with one of the traditional publishing houses, a sign that, for better or for worse, the traditional publishing model still has the social cachet and sales to dominate the industry.