LibraryLaw Blog

Issues concerning libraries and the law - with latitude to discuss any other interesting issues Note: Not legal advice - just a dangerous mix of thoughts and information. Brought to you by Mary Minow, J.D., A.M.L.S. [California, U.S.] and Peter Hirtle, M.A., M.L.S. Follow us on twitter @librarylaw LibraryLaw.com


Was CCC formed "at the suggestion of Congress"?

As I was reading Roy Kaufmann’s testimony on behalf of the Copyright Clearance Center (CCC) at the recent Congressional hearing on "Copyright Issues in Education and for the Visually Impaired," I was struck by CCC’s boilerplate statement that it “was created at the suggestion of Congress in order to help clear photocopy permissions.”  You can find this in other places.  For example, CCC claimed in its response to a 2012 "Notice of Inquiry Concerning Orphan Works and Mass Digitization" that “CCC was created at the suggestion of Congress in the legislative history of the Copyright Act of 1976.”  I haven’t been able to pin down the earliest use of this assertion, but it appears to date from the late 1980s. 

It is easy to understand why CCC would want to claim it has a Congressional mandate for its programs.  But is the claim accurate?  Did Congress suggest that CCC be created?

The answer is unequivocally "no."  A Congressional committee did suggest that it thought a voluntary licensing scheme would be desirable, but it never suggested that CCC was the form that such a scheme should take.

A good starting point is S. Rept. 94-473, the report in 1975 accompanying S. 22, which eventually became the 1976 Copyright Act.  The Judiciary Committee has this to say on p. 70-71:

The committee therefore recommends that representatives of authors, book and periodical publishers and other owners of copyrighted material meet with the library community to formulate photocopying guidelines to assist library patrons and employees.  Concerning library photocopying practices not authorized by this legislation, the committee recommends that workable clearance and licensing procedures be developed.

The Judiciary Committee was echoing sentiments that had been expressed repeatedly throughout the years of copyright revision.  One example: in 1967, the House report on the proposed copyright law noted on p. 33:

Recognizing that our discussion in this report is no final answer to a problem of shifting dimensions, we urge that those affected join together in an effort to establish a continuing understanding as to what constitutes mutually acceptable practices, and to work out means by which permissions for uses beyond fair use can be obtained easily, quickly, and at reasonable fees. Various proposals for some type of Government regulation over fair use and educational reproductions have been discussed since the hearings, but the committee believes that workable voluntary arrangements are distinctly preferable.

There is no question, then, that some in Congress hoped that "workable clearance and licensing procedures" could be developed.  Publishers responded by forming CCC.  But to imply that CCC is an embodiment of Congressional desire goes too far.  There is little evidence, for example, that CCC represents the collaborative, mutually-beneficial solution recommended by the Senate committee in 1975.  CCC's existence as a creature of publishers' interests was highlighted by the court that denied CCC tax-exempt status in 1982:

We are not faced here with a truly joint undertaking of all parties--publishers, copyright owners, users, and governmental agency--concerned with proper enforcement of the copyright laws, in which efforts are focused on meeting the needs and objectives of all involved. Instead, petitioner was organized by a segment of a publishers' trade group, the Technical, Scientific, and Medical division of the AAP, and there is little persuasive evidence that petitioner's founders had interests of any substance beyond the creation of a device to protect their copyright ownership and collect license fees. 

Furthermore, we have no reason to believe that the Senate committee would have endorsed many of CCC's current activities.  Would Congress, for example, conclude that the suite of licenses offered by CCC represents "workable clearance and licensing procedures"?  And would the committee conclude that litigation on behalf of its publisher clients is an appropriate licensing activity, one that is a "mutually acceptable practice" to all parties?  (CCC is funding the litigation against the faculty and staff of Georgia State University that deceptively argues that educational library reserve use is somehow equivalent to coursepacks sold by commercial vendors.)  Even in 1975, the Senate committee was aware that library use was changing and that copyright practices would consequently have to adjust, noting that even while adopting the provisions in the 1976 Act, "the committee is aware that ... there will be a significant evolution in the functioning and services of libraries" that may "necessitate the need for changes in copyright law and procedures."

Finally, did the Senate envision an organization whose finances and administration are as secret as CCC's?  Because it lost its tax-exempt status, CCC is not required to file IRS Form 990, which can reveal so much about the operation of tax-exempt non-profits.  No detailed figures on CCC's operations are posted to its web site; we have no data on how much money is remitted to authors (as opposed to publishers); and we do not know if that permission revenue is an important incentive to authors for the creation of new works.

So let's stop agreeing that CCC was formed "at the suggestion of Congress."  At best, it is the publishers' response to a suggestion of Congress.  Perhaps the best description of CCC is found in the CONTU final report: “a nonprofit New York corporation created under the sponsorship of publisher and author organizations.” 

Comments (0)

Tags: copyright, Copyright Clearance Center, licensing

What the University of Arkansas controversy can teach us about archival permission practices

(By Peter Hirtle)

By now most archivists and many librarians will have heard something about the controversy concerning the use of material found in Special Collections at the University of Arkansas.  Researchers from the Washington Free Beacon (WFB) web site requested and received copies of audio tapes found in a collection.  The WFB published some of those audiotapes online.  It did not, however, first seek permission to publish the items, as library policy requires.  Its reporters' access to special collections was therefore suspended (“banned,” in the words of the site).  You can find online an overview of the controversy.  The WFB’s initial coverage is here and here; the response from the University of Arkansas is found here.  Coverage that includes some of the documents discussed below can be found on Business Insider.

All archivists would agree that researchers who do not follow the policies to which they have agreed can be kept from the reading room.  For example, a reporter who insists on using a fountain pen when a repository requires that only pencils be used (in order to protect original material) should have her access suspended.  I don’t have a problem with Arkansas suspending WFB’s research privileges.

The controversy does, however, raise interesting issues about archival practice relating to reproductions and permissions.  As the University of Arkansas repeatedly notes, its policies “are the same policies and procedures followed in innumerable academic libraries across these United States."1 A detailed analysis of Arkansas’s procedures can therefore shed light on a common archival practice. The controversy has brought to light source material that makes such an analysis possible.  While Arkansas’s documentation of its policies was admirably complete prior to the controversy, the subsequent articles, letters, and emails have fleshed out the justification for its permission to publish policy.  There is therefore a rich trove of source material from which to work. One newspaper article cited emails suggesting that Arkansas itself is engaged in a review of its permission practices.  This discussion may therefore be of benefit to that review.

The Arkansas case study demonstrates that archival “permission to publish” is a practice that is both poorly understood and which can be detrimental to the donor, the repository, and the researcher. Following this standard archival procedure, as the University of Arkansas suggests, is not "good business practice...[that] makes operations run smoothly."  It is time for repositories to get out of the "permission to publish" game and leave permissions to the copyright owner.

“Intellectual property rights”

The University of Arkansas requires that patrons complete a permission to publish form before publishing material from its collection.  On what basis is that demand made?  The letter of 17 June 2014 from Dean of Libraries Carolyn Henderson Allen to Matthew Continetti of the WFB suggests one possible justification.  It demanded that the web site “cease and desist your ongoing violation of the intellectual property rights of the University of Arkansas with regard to your unauthorized publication of audio recordings…” 

University of Arkansas letter to WFB

 

One of the exclusive rights of a copyright owner is the right to control the reproduction and distribution of a work.  If the university owns the intellectual property in the recordings, it would be free, as is any copyright owner, to require prior permission to reproduce them.  Furthermore, it would be free to demand that the web site cease further distribution of the work if its copyright has been infringed. It could even issue a DMCA takedown notice to either the web site or to the ISP that hosts it.  It might make sense for the university to have a standard form to request permission to publish material whose copyright is owned by the University of Arkansas and for which permission is needed, i.e., when the proposed use is not a fair use. (One would hope that a university copyright owner would not demand permission for uses that would otherwise be fair.)

In this case, though, there is no indication that the University of Arkansas holds the copyright in the recordings.  The finding aid to the collection is silent on copyright ownership.  Subsequent news reports revealed that the deed of gift for the collection was missing.3 The university has since acknowledged that Roy Reed owns the copyright in the material.4  And Reed is quoted as being unsure about copyright ownership, speculating that it might belong to Esquire magazine, for whom he conducted the interviews.5

WFB challenged the library’s assertion of intellectual property rights in its lawyer’s response of 19 June 2014.  The letter notes that “any reference to a claim based on your ‘intellectual property’ is patently frivolous.” It seems as if the university may agree; it has made no subsequent assertion about infringement of "intellectual property rights."

Acting as if you own intellectual property rights in content when you don’t own the copyright is a form of copyright misuse that Jason Mazzone has labeled as “copyfraud.”  Copyfraud in an archival repository is especially pernicious because it works against the interests of the real owners of copyright.  If one can secure permission from a repository that claims it has intellectual property rights in material in its collection, a researcher may assume that there is no need to seek further permission from the real copyright owner, regardless of what the repository may say.  Who could imagine that there are two owners of "intellectual property" in an object whose permission must be sought?  Most of all, if the repository is confused about the nature and scope of its rights in a collection, what hope is there that the researcher will get it right?

Reading Room Rules and Regulations

As is the case with much material found in a repository, the university does not have an intellectual property interest in it.  What it does have is physical ownership of the collection.  And it can use its ownership of the physical material to impose quasi-copyright-like permission restrictions on the material.  It does this via its contractual agreement with the researcher.

To gain access to the reading room (or to be sent reproductions by mail), one must first complete an “Application for Research Privileges.” In that application, one must “agree to abide by the rules and procedures of Special Collections as set forth in its Reading Room Regulations."  The Reading Room Regulations specify that "Publication of any material found in the manuscript collections of the University of Arkansas Libraries Special Collections is permitted only after a completed 'Permission to Publish Request' is approved and signed by the Head of the Department."  

Note that the agreement specifies that permission is required for "any material" in the collections, not just copyrighted material. Nor is it restricted to material whose copyright is held by the University.  That is an indication that this is a regulation that is based on physical ownership rather than copyright.

Limitations of Contract

The ability of repositories to restrict “downstream” use of material through contract rather than copyright is sharply limited.  For example, it seems that the copies of the audio tapes at issue in the Arkansas controversy were made for an independent researcher, Shawn Reinschmiedt.  Reinschmiedt appears to have been working for the WFB, which would suggest that the WFB is bound by agreements that its agent signed. In addition, the WFB reporter is reported to have worked directly with Reinschmiedt in the reading room selecting the material for duplication.

But imagine that Reinschmiedt was a purely independent actor.  After uncovering the tapes and securing copies, he decided to turn copies over to a news agency for its review.  That agency then elected to publish the tapes.  The news agency in that scenario signed no agreement with the University of Arkansas and has no obligation to follow its policies and procedures.  Unlike copyright, which everyone must follow, a contractual agreement is binding only on the party that comes into the Arkansas reading room.

Permission to Publish

Permission to publish forms must be functionally important to repositories if they are willing to risk confusing researchers about the repository’s rights in material and potentially engage in copyfraud.  Censorship does not appear to be one of Arkansas's justifications. There is no discussion on its site of situations in which permission might be denied, and the university reports that it has "never denied a permission to publish for a patron." The only thing that seems to matter is that the “permission to publish” form is signed.  So what is found in this form that is so important? 

It is difficult to understand why the university insists on the use of the form.  It requires just two things of signatories:

  1. It requires that one "cite the source of the material as described in the Special Collections 'Citation Guide.'"
  2. The requestor must arrange to have "the publisher send a gratis copy of the publication to Special Collections."  Requiring payment - in this case a copy of the publication - may raise liability issues for the repository, as is discussed below.  It also contradicts what is required by the Application for Research Privileges, which simply states that "Special Collections requests a copy of the book or article should your research here result in a publication. If a copy is unavailable, please provide us with a bibliographical citation and a copy of each page on which material from this department is cited."  Which approach is binding: the request in the application that is signed by the researcher, or the reference in the researcher application to the Reading Room regulations? The latter requires that a permission to publish form be completed, and that form requires, not requests, that a free copy of a book be sent.

The applicability of the permission to publish form is also weirdly restricted:

I understand that this permission will be valid only insofar as the University of Arkansas, as owner or custodian, holds rights in the material, and does not remove the responsibility of the author, editor, and publisher to guard against infringement of any rights, including copyright, that may be held by others.

What do they mean when they say that the permission "will be valid" only when the University of Arkansas owns rights in the material?  In the Reed papers case that is at the heart of the controversy, the University does not hold copyright in the material.  That means that the WFB’s access was suspended because it did not sign a document that the University says would not be valid even if it had been signed.  I think what Arkansas is trying to say is "if we own the copyright, you have our permission.  But if we don't own the copyright, you need to get the permission of the copyright owner - and you still need our permission." But the form is unclear.

Justifications for the Permission Requirement

What justification could there be for having a permission requirement? In a letter to the Boston Globe, Laura Jacobs, the associate vice chancellor of university relations at the University of Arkansas, cited three reasons why requesting permission to publish is important:

  1. It "is important to record keeping."  What record-keeping is involved, and why it is important, is unclear.  I can't think of any.
  2. It can trigger "a conversation between the library and a researcher about potential copyright infringement."  The Citation Guide does that already, with a long discussion of copyright at its start.  The photocopy request form stipulates that "I understand that I am responsible for complying with the laws governing copyright and literary property rights."  The scanning request form goes further, stipulating that "I understand that I must also obtain permission from the original photographer/creator of each item."  Whether further discussion could place the repository itself at risk is discussed below.
  3. It "allows a library to track the use of its material."  At first glance this is desirable, but in practice it makes little sense.  For example, there are probably hundreds of web sites that have cited, quoted from, or reproduced the Roy Reed material since it was published by the WFB.  Even if the WFB had completed the permission form, there is nothing in the form that would allow the University of Arkansas to track this third-party use.  And tracking appearances of the use of material has been a limited priority for libraries.  For example, according to a WorldCat study, only 130,000 records out of the 300+ million WorldCat records use the 581 field that tracks "publications about the described material."

A subsequent letter from Allen and Nutt expands on the tracking argument: "[W]e also want to track use and keep the donors of our collections apprised of how their papers have been used. It’s good customer relations."6 I am going to assume that Arkansas would meet this goal in a way that protects patron confidentiality.  But might not a simple request that states that the library would like to be informed about publications be enough, as the application for research privileges form requests?

Library as Copyright Police

The Allen and Nutt letter also hints at yet another reason for the permission to publish form when it introduces the tracking discussion with this odd phrasing: "Disregarding any legal onus we might have to protect copyright..."  Are they disregarding this because it doesn't exist (but then why bring it up)?  Or are they hinting that the library does have a legal responsibility to serve as the "copyright police" and is legally obligated to prevent copyright infringement?

This would be a dangerous position for any library to argue.  In effect, it is arguing that the library may have legal liability for secondary infringement based on the actions of its patrons.  The library profession has a long history of resisting calls to serve as "copyright police." To suggest otherwise only increases, not decreases, the potential legal liability of libraries.

An Unstated Justification: Revenue?

At many institutions, permission to publish is only granted upon the payment of permission fees.  While not explicitly tied to the “permission to publish” form, Arkansas does, in its “Publication Fees” schedule, hold out the possibility that it might charge users who wish to publish material from the collection:

Publication of images from holdings in the Special Collections of the University of Arkansas Libraries requires permission from the department as well as from the holder(s) of copyright. Fees may be charged for such use. Publication includes print media, audio‐visual media, broadcast media, web sites, exhibits and displays, or any other form of distribution. These fees are separate from any which might be assessed by the copyright holder.

To its credit, it does not appear that Arkansas has criticized or suspended WFB for failure to pay fees for publication.  I have found no discussion of publishing permission fees in the documents about the controversy.  But I also have not found any documentation about when the university would elect to charge such fees and when it waives them.

The Repository’s Potential Liability for Reproductions

The issue of whether the university requires payment in return for its permission to publish, either through the requirement for a gratis copy of the work or via payment of fees, has implications for the repository’s own liability.  When a repository makes a copy of a copyrighted work for a researcher without the permission of the copyright owner, it has potentially infringed on the copyright owner’s exclusive rights of reproduction and distribution.  The copyright owner could bring legal action against the repository. 

There are two possible defenses that the repository could invoke.  First, it could argue that the copy it made was exempt from damages by Section 108(e) of the Copyright Act.7  This section allows a library or archives to make a copy of an entire copyrighted work if certain conditions are met.  For example, a copy cannot be available for sale at a reasonable price; the request for the copy must be made on a form with language specified by the Librarian of Congress; the library or archives must have no knowledge that the copy will be used for anything other than private study, scholarship, or research; and the copy must become the property of the requestor. 

Section 108 also stipulates that the copy not be made for purposes of “direct or indirect commercial advantage” (17 U.S.C. § 108(a)(1)). A library that charges publication fees is arguably accruing direct commercial advantage from its reproduction.  If a copyright owner elected to bring an infringement suit against the repository for making a copy, it is unlikely the repository would be able to use 108 as a defense. 
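The Section 108 conditions discussed above lend themselves to a checklist.  The sketch below is a loose illustration of that reasoning, not a statement of the law: the function and parameter names are my own simplifications, and the statute contains further requirements (such as the 108(i) media exclusions) that are not modeled.

```python
# Simplified sketch of the Section 108(e)/(a)(1) conditions discussed above.
# Illustration only, not legal advice; several statutory nuances are omitted.

def section_108e_defense_available(
    copy_available_at_reasonable_price: bool,
    request_on_prescribed_form: bool,
    known_use_beyond_private_study: bool,
    copy_becomes_property_of_requestor: bool,
    made_for_commercial_advantage: bool,  # 108(a)(1), e.g., publication fees
) -> bool:
    """Return True only if all of the simplified conditions are satisfied."""
    return (
        not copy_available_at_reasonable_price
        and request_on_prescribed_form
        and not known_use_beyond_private_study
        and copy_becomes_property_of_requestor
        and not made_for_commercial_advantage
    )

# A repository that learns the patron intends to publish loses the
# defense under this model, as does one charging publication fees:
print(section_108e_defense_available(False, True, True, True, False))  # False
```

The point of modeling it this way is that every condition is conjunctive: a single failure, such as a publication fee or knowledge of an intent to publish, defeats the defense.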

There is another way in which the use of a “permission to publish” form could increase the repository’s potential liability.  Remember that one of the justifications for the use of the permission to publish form is that it can spark "a conversation between the library and a researcher about potential copyright infringement." Section 108, however, stipulates that the library have no knowledge that the copy is being made for anything other than “private study, scholarship, or research.”  As soon as the repository learns that the patron intends to publish a copyrighted work, its 108 privileges disappear. 

It would seem that Arkansas wants to use 108(e) to make copies for users.  The copyright notice required by the section appears on its reproduction order forms.  Furthermore, the university requires that users stipulate on the order form that the copies are for “my exclusive use, for the sole purpose of research or study convenience.”  Yet as noted above, the payment of publication fees would likely negate a 108(e) defense. Furthermore, the discovery that copies might be used for something other than personal research would also remove the section as a defense.  Lastly, Section 108 requires that any copies made “become the property of the user.”  The Arkansas reproduction order form, however, states that no copies may be deposited in other repositories.  This attempt to limit the ownership rights of the patron may also destroy the university’s 108 protections.  In short, the manner in which Arkansas (and, by extension, most other archival repositories) implements its reproduction and permission policies may make it ineligible for 108 protections in the event of any suit for direct or indirect copyright infringement.

Of course, the repository could always rely on “fair use” as a justification for its copying.  But even with fair use, reproductions made for commercial advantage are arguably less likely to be found fair.  Furthermore, in at least one court decision, the ruling on whether a repository could make a copy of an item for a researcher depended in part on whether that researcher’s use was fair.  Consulting with researchers about their publication plans puts the repository in the unenviable position of having to assess whether any individual researcher’s use is fair.  The consultation (and any required publication fees) increases the likelihood that any reproductions the repository makes for a researcher will expose it to damages for secondary liability (both contributory and vicarious). 

Conclusion

A close examination of the “permission to publish” policies of one typical institution demonstrates that they make little or no legal or policy sense.  They can confuse researchers (and library staff) about the nature of the repository’s rights in the material.  They can place the repository in the unenviable and unsustainable position of having to assess the legality of the researcher’s proposed use.  Requirements for compensation (either directly or in the form of complimentary copies of publications) may negate the repository’s normal defenses against a charge of copyright infringement for its copying. 

A better approach would be to drop any requirement that researchers secure the repository’s permission prior to publication.  The repository would instead provide researchers with copies for private study, scholarship, or research.  If a researcher wished to use a provided copy for publication, it should be the researcher’s responsibility to determine if her use is a fair use or if permission of the copyright owner is needed.  In some cases, the repository might be the copyright owner and so the researcher would ask the repository for permission to publish. But that should be the only time that repositories are involved with permissions.

This does not mean that the repository should not educate users about potential rights issues in collections.  It should make sure that the researcher knows that she is responsible for securing all needed copyright permissions.  It should also make it clear that the user should not infringe on the privacy or publicity rights of any subject found in the collection.  But this is best handled with disclaimers and with education, and not via “permissions.”

1. Carolyn Henderson Allen and Timothy Nutt, "Rules same for all," Arkansas Democrat-Gazette, 5 July 2014, p. 9B: "...a permission-to-publish form must be completed. The form is a standard procedure for academic libraries."

2. Ibid.

3. Jaime Adame, "UA loses deed on journalist’s donated files," Arkansas Democrat-Gazette, 3 July 2014, p. 2B.

4. Allen and Nutt, "Rules."

5. Adame, "UA loses deed."  The issue of who owns the copyright in an interview is a fascinating one worthy of its own blog posting.  In this case, no one is talking about possible copyright ownership interest of the interviewee, Hilary Rodham Clinton.

6. Allen and Nutt, "Rules."

7. Because the audio tapes at issue were made after 15 February 1972, they are subject to federal copyright protection and hence also the 108 exceptions.  And since the tapes do not embody one of the media formats excluded from most of 108 by 108(i), copies of the tapes could be made using 108(e).

Comments (1)

The New Handbook of the Public Domain: Review

[UPDATE: That didn't take long.  The authors of the handbook have responded to my specific issues below by updating and/or correcting the handbook.  A new version is available at http://www.law.berkeley.edu/files/FINAL_PublicDomain_Handbook_FINAL(1).pdf.  A very good resource has become even better.]

 

It is very difficult to determine whether works are in the public domain in the United States.  That is why I had to create my duration chart as an aide-mémoire: any time I tried to remember the various options, I got them wrong.  It is also why I felt compelled to write an article highlighting some of the traps lurking within the seemingly clear-cut categories.  And it is why Stephen Fishman needs 700+ pages in his legal treatise, Copyright and The Public Domain.


Public Domain Logo by Creative Commons / CC BY

 

Now the good folks at the Samuelson Law, Technology & Public Policy Clinic at the University of California, Berkeley, School of Law have stepped into the fray with a new publication. Is it in the Public Domain? is described as "A handbook for evaluating the copyright status of a work created in the United States between January 1, 1923 and December 31, 1977." It consists of a set of questions and charts to help readers determine the public domain status of works created under the Copyright Act of 1909. The obvious question is "How good is it?"

The answer has to be “not bad,” especially when one takes into account the complexity of the legal environment for these works.  For example, it is not enough to talk just about the 1909 Act.  A work created before 1978 but first published after that date is governed by the rules of the 1976 Copyright Act, as amended.  That means the section on duration must account for the different rules that govern five different publication periods.  The handbook does that nicely.  The chart on p. 26 that explains copyright notice requirements for different kinds of material is the clearest that I have seen; I expect to use it frequently. The “tips,” “traps,” and “special cases” that are highlighted throughout the text are particularly valuable.

As one would expect, however, with such a complex area of the law, there are elements that are left out or glided over.  The handbook is an excellent resource if the material you are examining is relatively straightforward, but the handbook does not elucidate the copyright status of all works created in the U.S. before 1978.

Perhaps the most problematic part of the handbook is Chapter 1 on Subject Matter. The handbook notes that "Nearly all works created between January 1, 1923 and December 31, 1977 will qualify as valid subject matter." It then lists the categories of material that could be registered for copyright.  I think they are trying to establish the important point that while unpublished works in general were not protected under the 1909 Act, certain unpublished works - primarily those that might be performed in public - could be registered and receive federal copyright protection.  Those works have a different duration than other unpublished works that were never registered.

I worry, though, that some might read the list of registration categories as an absolute list of what could be protected by copyright. Both the 1909 list and a similar list in the 1976 Act are illustrative, not exclusionary.  Works that fall outside of the list could still be protected by copyright.  The handbook authors know this. In their flow chart, they note that a “yes” response to the question of whether a work corresponds to one of the listed subject categories means that the work could be protected by copyright, but they do not show a corresponding “no” response as injecting the work into the public domain.  In addition, there is no discussion of how broadly the categories could be interpreted.  Software, for example, can be protected as “a writing of an author.”  Advertisements are protected as literary or graphic works, even though they are not in the subject list.  Finally, in light of the decision in Bridgeman v. Corel, there is some question as to whether copyright can exist in one of the categories specified in the 1909 Act: reproductions of works of art.

Many have found it easier to discuss what cannot be protected by copyright rather than try to define what is included.  I think such an approach would help this handbook.  One would learn, for example, that works of the Federal government are in the public domain, an exception that is missing from the handbook.  The same is true of court decisions, recipes, typeface designs, works of architecture, and useful articles (some of which are mentioned in passing).

An even larger problem arises from the handbook’s failure to discuss adequately issues that arise from the nationality of the creator and the place of publication.  I have argued that the law that restored copyright to foreign works has made it almost impossible to determine with certainty the copyright status of many U.S. works.  The handbook in its title and first few pages indicates that it governs works “created” in the U.S. but then fails to develop the theme.  Yet a work created in the U.S. by a foreign national who then publishes it abroad would not be subject to the flow charts in the handbook.  The handbook’s analysis does a good job with unpublished works created in the U.S. and published works that are first published in the U.S. (regardless of place of creation).  But there are traps here for the unwary, and more explication would have been good.

The handbook’s failure to acknowledge the monkey wrench thrown into the mix by copyright restoration also leads to some blanket statements that could be misinterpreted.  For example, we learn on p. 8 that “Sound recordings created before February 15, 1972 are protected by a patchwork of state laws." Copyright in foreign sound recordings made after 1923 was “restored” by 17 U.S.C. § 104A. One could imagine a recording made by Vladimir Horowitz in the U.S. before 1944 that was only released by a European record label; that recording would likely be protected as a foreign work. 

In comparison to problems with subject matter, the other mistakes in the handbook are minor.  One of the periods discussed under general publication is “Between March 1, 1989 and January 1, 2002." It should read "January 1, 2003." In several spots, the handbook mentions that Copyright Office records can be searched in the Copyright Office or that one can hire the Office to do the search.  It might have been good to discuss as well the use of the Catalog of Copyright Entries and the offshoots from it such as the Stanford Copyright Renewal database. The handbook’s discussion of what constitutes an acceptable copyright notice omits the use of ℗ ("P" in a circle), the phonogram symbol used with post-1972 sound recordings (though it is included in the excellent chart on notice requirements). On p. 35, the handbook states that “some states have decided to grant protection until 2067." I was aware that California protects sound recordings through 2047, but I was not aware of any other states that had temporal protection limits.  I would have liked to have seen a footnote.  Finally, the text tells us that “this Handbook is only accurate up to the date it was published—August 10, 2012.”  However, the cover carries a publication date of January, 2014 and it was announced on 27 May 2014.  I can’t think of any significant changes to the law since 2012, but it would be nice to know for sure what its “sell-by” date is.

These are minor quibbles, however.  Anyone who has a work that is clearly a US work (i.e., a U.S. author living in the U.S. and, if published, published first in the U.S.) will be well-served by this guide.  It won’t identify all works that are in the public domain (for example, U.S. government works).  Nor does it discuss divestive public display (i.e., display without restrictions on copying), which likely injected many works of art into the public domain.  And it can’t guarantee that someone who thinks they have rights in a work won’t object strenuously to a contrary interpretation.  As the introduction to the handbook notes:

It does not describe how the law might apply to any specific work. It is not a complete discussion of all legal issues that may arise when deciding whether or how to use a work, nor is it a substitute for legal advice. Further, two courts may reach different conclusions about the copyright status of a work based on the same set of facts. Accordingly, using this Handbook does not guarantee the accuracy of any assessment of copyright status with respect to an individual work, and does not shield you from liability for copyright infringement...Further, even if a work is in the public domain with regard to copyright, using it may raise legal concerns outside of copyright, such as concerns related to privacy rights or contractual restrictions on the work’s use. This Handbook does not cover any of these other legal issues.

But within the context of these reasonable caveats, the handbook has met its goal. I will be sure to add it to my list of recommended resources.

Comments (0)

Tags: public domain

Norway, Extended Collective Licensing, and Orphan Works

(by Peter Hirtle)

[UPDATE below]

On 10-11 March, the Copyright Office sponsored a roundtable on the problem of orphan works: works protected by copyright whose authors cannot be located.  I didn't attend, but you can find summaries of the discussion here, here, and here.  Written comments on the issue are due to the Copyright Office by 14 April.

One of the major topics under discussion was Extended Collective Licensing (ECL) and its possible application to the mass digitization of orphan works.  This reminded me of the recent flurry of articles and posts about changes to the Norwegian National Library's use of an ECL.  I wrote in 2011 about Norway's first experiment with a library-funded ECL and its potential as an alternative to the proposed amended Google Books settlement.  On 28 August 2012 the Library signed a new agreement with Kopinor, the Norwegian organization that represents many authors and publishers.  The change seemed to have gone unnoticed in the West, though, until Atlantic author Alexis Madrigal wrote a short piece in December, 2013 on how Norway planned to digitize and make available "all Norwegian books."  A 16 January AFP article on the project was picked up and discussed in the Telegraph, and that sparked a flurry of other articles and blog postings.  It may have been because the Telegraph article had the irresistible title, "Books go online for free in Norway."  It also led to a lot of hand-wringing over why the US wasn't digitizing and making all of its books available for free.

What is new and what is the same since the 2010 agreement?  To answer that question, one must actually read the new agreement, which is available in html or as a pdf.  

The biggest similarity is in the use of the material.  Users may still not download or print from a book in the system; it is for online reading only.

As for differences, there are several:

  • The number of books in the project has increased dramatically.  The pilot agreement was for 50,000 books with only one decade (1990-1999) of 20th century books included.  The new agreement covers all books published in Norway (including translations from other languages) until 2001. The estimate is that this could be 250,000 books.
  • The original agreement only allowed for one page to be viewed at a time.  The new agreement allows the books to be presented in the format used at the National Library.
  • Both agreements allow users in the Norwegian IP domain to access books, but the new agreement holds out the possibility that external researchers with distinct research projects could consult the corpus.

The most interesting change is in the pricing.  The 2010 agreement required that the National Library pay annually NOK 0.56 (about 10 cents) for every page "made available" (not read).  That price has dropped dramatically in the new agreement.  The new fee was NOK 0.36 in 2013, and it drops to NOK 0.33 in 2015 and subsequent years.  That is 6 to 5 cents in USD.  Of course, given the dramatic increase in the number of volumes, the total amount being paid to Kopinor is considerable.  There is no estimate of pages per volume in the new agreement, but if we use that from the 2010 contract (185 pages), the Norwegian national library will be paying over $2.3 million/year to allow people to access and read Norwegian books online.
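The arithmetic behind that figure can be sketched in a few lines. This is only a rough check: the 185-page average comes from the 2010 contract, and the NOK-to-USD rate of roughly 0.15 is my own approximation of the exchange rate at the time, not a number from the agreement.

```python
# Back-of-the-envelope estimate of the annual Norwegian license cost,
# using the figures quoted above.
volumes = 250_000         # estimated books covered by the new agreement
pages_per_volume = 185    # average carried over from the 2010 contract
fee_nok_per_page = 0.33   # annual per-page fee from 2015 onward
nok_to_usd = 0.15         # assumed exchange rate (approximate)

annual_cost_usd = volumes * pages_per_volume * fee_nok_per_page * nok_to_usd
print(f"Estimated annual license cost: ${annual_cost_usd:,.0f}")
```

At these assumed rates the total comes out a little under $2.3 million, consistent with the rough figure above.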

Is this excessive? The answer might depend on how often the books are read.  By comparison, the UK is distributing over $11 million to authors through its Public Lending Right, but that works out to only $0.10 per loan. 

It is more interesting to think about how the Norwegian model would work in the US.  Let's assume that we changed our laws so that one agency could represent both members and non-member authors and publishers.  The Library of Congress then signed the same agreement as the National Library of Norway did.  How much would it cost the government?

To answer that question, one needs to know how many books published in the US are still protected by copyright.  Unfortunately, there is no easy way to compile the number of books protected by copyright from the Copyright Office's annual reports.  We could use, however, Brian Lavoie and Lorcan Dempsey's estimate that there have been 12.5 million books published in the US since 1923.  We can subtract 60% of the titles published between 1923-1964; the University of Michigan's Copyright Review Management System has found that roughly only 40% of copyrighted works were renewed and hence still protected by copyright.  So let's say that there are 11 million titles in our pool of US works.  If we use the same estimate of 185 pages per work and assume the $0.05/page royalty rate, it would cost a little over $100 million/year to provide online access to read (but not download or print) American works.  (And note that this only considers royalty payments, and not the cost of digitizing and delivering books.)
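The estimate can be reproduced with the figures quoted above; note that the 11-million-title pool is the post's own rounding, and $0.05/page is the rounded dollar value of NOK 0.33.

```python
# Sketch of the cost of a hypothetical US version of the Norwegian scheme.
protected_pool = 11_000_000   # rounded pool of still-protected US titles
pages_per_volume = 185        # average from the 2010 Norwegian contract
fee_usd_per_page = 0.05       # roughly NOK 0.33

annual_cost_usd = protected_pool * pages_per_volume * fee_usd_per_page
print(f"Estimated annual royalty bill: ${annual_cost_usd:,.0f}")
```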

Would Congress increase the Library's budget by almost 1/4 in order to provide this level of access?  One plus is that the Norwegian approach is not limited to orphan works.  All works are accessible under this plan unless authors expressly withdraw titles from the scheme.  No money is spent on "diligent searches" to locate copyright owners.  According to the head of the National Library in Norway, Vigdis Moe Skarstein, "Instead of spending our money on trying to find the copyright holders, we prefer to give it to them." 

There is something that seems unfair about having to pay to read a book that is an "orphan."  Any revenue that work generates will not go to the copyright owners, since they are unknown.  But according to the Copyright Clearance Center's annual report, it collected over $270 million in permission fees in 2013.  Some of that would still have to be paid by users who wanted to print or download copies of items, but many other readers might be happy with simple online access. Thanks to the ECL, they would be able to eschew payment. 

An extended collective license might be an acceptable solution to the orphan works problem - but only if all works, and not just orphans, were included.

UPDATE: I realized after posting this that I had not accounted for the difference in population size between Norway and the US.  The US population is 62.54 times larger than Norway's; it would make sense that rights owners would want to increase the license to reflect the larger potential reading audience. That means the annual license fee for US books could be $6.254 billion.  The Norwegian ECL is looking less and less like an option.


Comments (0)

Update on a Legal Action Against a Cultural Institution

Back in July of 2010 I reported on the lawsuits brought by the photographer Anne Pearse-Hocker against the National Museum of the American Indian, for making copies of her copyrighted photographs, and against Firelight Media, which used the photographs in its documentary film about Wounded Knee. I noted that it was one of the rare lawsuits against a cultural heritage institution and ended by noting that “This case should be interesting to follow – if it is not settled out of court.”

Thanks to the wonderful people at Justia, it has been possible to follow the court filings in the case. And as I suspected, both cases settled before proceeding to trial. Still, there are plenty of interesting hints in the documents about how the story played out.

The case of Pearse-Hocker v. Firelight Media, Inc. settled first, on 14 October 2010. The “Stipulation of Dismissal” says nothing about the terms, merely that each side is responsible for its own legal fees. An earlier document, though, suggests that the parties had “reached agreement as to the monetary term of settlement.” One would love to know whether the amount of settlement was symbolic or substantial.

In its initial answer to the complaint, Firelight admitted that

…the photograph numbered N44622 is shown at approximately minute 63 of the film for a duration of approximately seven seconds, that the photograph numbered N44926 is shown at approximately minute 64 of the film for a duration of approximately 16 seconds, and that the photograph numbered N45215 is show at approximately minute 65 of the film for a duration of approximately 7 seconds. 

But it also asserted among its defenses that this use was a fair use.  Whether the “Documentary Filmmaker’s Statement of Best Practices in Fair Use” would agree or not is open to discussion. In any event, Firelight Media now acknowledges on the movie’s web site that three uncredited photos by Anne Pearse-Hocker appear in the film.

The judgment in the other case, Pearse-Hocker v. United States, which was entered in June, 2011, is more informative. The museum (which as part of the Smithsonian is a unit of the US government – hence “United States” as the defendant) agreed to pay Pearse-Hocker $40,000.  In addition, it had to provide her with a digital copy of the 15 photo contact sheets in the collection, from which she could select 100 images to be provided to her at high resolution.  This would normally cost an additional $7,500. Finally, the director of the museum, Kevin Gover, had to send Pearse-Hocker a letter acknowledging her generosity in donating the photos to the museum. The museum did not have to return the collection of photos to Pearse-Hocker, however, which was one of the demands in her original complaint.

The museum did not admit that it had violated any laws or contracts, but it is hard to determine what defense it might have used if the case had proceeded to trial. Its pro forma response to the initial complaint hinted that it would have argued that Firelight Media’s use was a fair use and that it had a license from Pearse-Hocker to copy the material for Firelight.

What lessons can a cultural heritage repository take away from this case? First and foremost, it emphasizes the need to respect and follow the terms in a deed of gift. Sometimes deeds require practices and procedures that are outside of the ordinary, but that just means that our workflows have to be such that anomalous items are consistently identified.

Second, we should make sure that the terms in the deed are as clear as possible. Pearse-Hocker’s Deed of Gift (Exhibit B of the original complaint) states “I hereby also assign and transfer all copyright that I possess to the National Museum of the American Indian, subject only to the conditions which may be specified below.” What conditions were specified below? “I do not, by this gift, transfer copyright in the photographs to the Smithsonian Institution”! Why have a deed with two conflicting sections in it?

In addition, the deed granted to the museum “an irrevocable, non-exclusive, royalty-free, license to use, reproduce, display, and publish, in all media, including electronic media and on-line, the photographs for all standard, educational, museum, and archival purposes.” Many would argue that providing copies for non-profit documentaries on PBS is part of the standard educational mission of the museum.  Yet this interpretation could be in conflict with the next sentence of the deed, which states that “requests by people or entities outside the Smithsonian to reproduce or publish the photographs shall be directed to the donor.” If the Smithsonian felt that only for-profit uses should be referred to the donor, it should have made this clear in the deed.

Third, this case reminds us that running a repository involves taking risks.  We run the risk that users might steal collection material or that dirty documents caked in lead dust or mold might injure staff or patrons. We particularly run risks when we duplicate materials for patrons. It is an essential part of our service, but one that needs to be managed by knowledgeable practice and procedures. One wonders, for example, if the museum may have weakened its own defenses by charging a permission fee that is separate from the cost of making the reproduction. Such fees are designed to generate money for the museum, pure and simple. They are unconnected to “standard, educational, museum, and archival purposes,” and hence could not be supported by even the most generous reading of the license grant in the contract. Could the desire to secure $150 in permission fees have cost the museum almost $50,000 in damages?

Lastly, I would reiterate the point I made in my original post. Since the case against Firelight Media did not get very far, we do not know what its fair use defense might have looked like. I continue to suspect, however, that Firelight, like most of our users, did not really understand the difference between the permission given by the repository and the permission it needed from the copyright owner.  And it may not have understood that both were needed for its use of the photographs. The museum's invoice stated that “[p]ermission is granted for the use of the following imagery, worldwide, all media rights for the life of the project.” By providing only one of the permissions that users need, we may in the end be misleading them.

As with most lawsuits, I suspect that this was a bad experience for everyone except the lawyers.  Pearse-Hocker will be lucky if her $40,000 cash payment covers her legal fees in the case.  The museum is out that same amount of money, as well as its time and expense in defending itself. Most of all, therefore, this case reminds us about the importance of working with donors so that a disagreement never reaches this stage.

Comments (1)

Book Review: Wilkin on Orphan Works

(by Peter Hirtle)

CLIR has inaugurated a new publication series called Ruminations, and for its first report, it has published an incredibly interesting and important report by John Wilkin.  In “Bibliographic Indeterminacy and the Scale of Problems and Opportunities of ‘Rights’ in Digital Collection Building,” Wilkin explores the status of copyrights in the 5+ million volumes that have been digitized and deposited with the Hathi Trust.  He provides hard data that builds on the earlier work of Brian Lavoie and Lorcan Dempsey on the nature of the WorldCat database and Michael Cairns on the number of orphans.

Wilkin’s work is a rich and nuanced piece that stimulates thoughts of broad importance as well as questions about the soundness of some of his assumptions.  Here are some of my initial thoughts, sparked by reading an early draft as well as the published product.

First, Wilkin’s analysis provides solid data on the scope of the orphan works issue (works that are protected by copyright but for which a copyright owner cannot be located).  “Even before we are finished digitizing our collections,” Wilkin concludes, “the potential numbers are significant and surprising: more than 800,000 US orphans and nearly 2 million non-US orphans.” 

The size and scope of the orphan works problem was one of the subtexts in the debate about the Google Books Amended Settlement Agreement (ASA), and critics of the settlement argued that the ASA would give Google an unconscionable monopoly over orphans.  Wilkin’s work, however, indicates that the universe of orphan works is much, much larger than the ASA would have made accessible.  His calculations do not distinguish registered and renewed US copyrighted works from unregistered U.S. titles; the ASA (unlike the original settlement agreement) would have only given Google the right to use registered orphan works.  That number would be far smaller than 800,000.  As for the foreign works, we don’t know how many of the 2 million titles Wilkin identified as orphans were published in England, Canada, Australia, or New Zealand (the countries that would have been part of the ASA) and how many were published in the countries that were not included in the ASA.  It is likely, however, that only a small percentage of these two million would have been accessible via the ASA.  The major problem with the ASA, therefore, was not that it would give Google a monopoly over all orphan works, but rather that it would leave millions of titles still inaccessible.  The original settlement agreement was actually better in this regard, even though it had a slew of other problems. 

With the rejection of the ASA, even its partial solution to the orphan works issue is gone.  The legislation proposed by the Copyright Office to address the issue in its final report on the orphan works problem is no better.  Given the scope of the issue as identified by Wilkin, no mass digitization effort like the Hathi Trust’s could ever afford to engage in a title-by-title search for copyright owners.  There needs to be a third solution.  The Trust recently announced that it is going to engage in a study on how to locate the owners of orphan works that should help further explicate the scope of the orphan works problem.

Second, Wilkin’s analysis is based on the 5+ million books secured in the Hathi Trust.  They represent a 31% (and growing) overlap with the holdings of ARL libraries.  It is pretty clear to me that Hathi has started building the Digital Public Library of America that others are talking about.  I am also curious as to how big that library will get, and whether the patterns Wilkin has identified will continue to hold as the collection grows.  Can we assume, for example, that if 31% of academic library collections are duplicated in the 5 million volumes already in Hathi, then 16 million volumes will give us 100% overlap?  And will the number of orphans also increase threefold?
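The extrapolation implicit in that question is linear, which is itself an assumption worth flagging:

```python
# Naive linear extrapolation: if 5 million volumes duplicate 31% of
# ARL holdings, how many volumes would reach 100% at the same rate?
volumes_in_hathi = 5_000_000
overlap_fraction = 0.31

full_coverage_estimate = volumes_in_hathi / overlap_fraction
print(f"~{full_coverage_estimate / 1e6:.1f} million volumes")
```

Since the rarest titles are the least likely to be duplicated, overlap presumably grows more slowly as the collection expands, so the roughly 16 million figure is best read as a lower bound.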

Third, the bulk of Wilkin’s essay is devoted to counting the status of works in the database.  He notes: “there is considerable nuance and some tricky exceptions to all of these rules, which I won't try to supply here.”  I live for nuance and tricky rules, so the rest of this too-long review is devoted to his assumptions.

  1. While admitting it is an oversimplification, Wilkin assumes that “all pre-1923 books are in the public domain.”  That is a pretty good assumption.  There is only one instance that I can think of where a pre-1923 book could still be protected by copyright (excluding the weird Twin Books decision on foreign books in the 9th Circuit).  If a pre-1923 book was published without the authority of the copyright owner, or included a work that was published without the authority of the copyright owner, that work might still be protected by copyright if the copyright owner published it after 1923.  Reproducing the pre-1923 work would infringe on the later copyright.  This is apparently the case with the song  “Happy Birthday.”  According to Robert Brauneis in his fascinating history of the “world’s most popular song,” the lyrics to Happy Birthday were reprinted many times starting in 1912, but the first authorized publication (and registration) occurred in 1935.  Distributing any of the earlier versions could infringe on the copyrights created in 1935.  But the number of cases of this must be tiny, so I can live with Wilkin’s assumption (and also the slight risk that it entails when digitizing a pre-1923 work).
  2. Wilkin also mentions the findings of Michigan’s Copyright Review Management System (CRMS), which has found that 55% of US works published between 1923 and 1964 are in the public domain.  Wilkin doesn’t link to the protocols that CRMS follows when investigating the copyright status of a work, but they can be found here, and are excellent.  For example, unlike other prominent digitization projects that look only at place and date of publication of the volume in hand, CRMS takes into account possible prior foreign publication of a title before its appearance in the US.  A book published in the US in 1935 and not renewed would appear in the Copyright Office records as being in the public domain, but if a version was first published in London in 1934, the work would still be protected in the U.S.  (I talk about this issue more here.)  The CRMS protocol requires that investigators look for evidence of prior foreign publication. 
  3. Wilkin posits this assumption: “For non-US works published between 1923-1963, roughly 20% will be in the public domain (e.g., because the author died before 1941, as would be the case for determining public domain status for works published in countries like the US that has a term of life plus 70 years).”  Here he misconstrues the operation of Section 104A, which restored copyrights in most foreign works.  The key determination for most countries is whether the work was in the public domain in its home country as of 1 January 1996.  So the question is how many works published between 1923 and 1996 had authors who died before 1927 (in a life+70 country such as the Netherlands or Germany) or 1947 (in a life+50 country, such as Canada) – not 1941.  (One would also need to take into account local extensions, such as the additional copyright protection for authors of musical works who are “mort pour la France.”)  I think that more research on foreign copyrights would need to be done before we could assume that 20% of pre-1964 works (which by itself is a date that only matters to US works, and hence odd to see used as a dividing line in this discussion of foreign works) are in the public domain.
  4. Wilkin suggests that for copyrighted works published between 1923 and 1963, “we will be able to contact only 10% of the authors, publishers, or heirs who hold rights.”  He hints that his estimate is based on the important work done by Denise Troll Covey, but I don't see how one can extrapolate any useful percentages from Covey's bar graphs. 

This last point strikes at the real heart of the problem with Wilkin's conclusions.  They are based on an assessment of how likely it is that we will be able to locate copyright owners.  Right now, that is all guesswork.  We have no way of knowing if 20%, or 30%, or 50% of books in the Hathi Trust collection will turn out to be true orphans.  Wilkin's careful initial assessment of the nature of the collection, however, would suggest that he and his Michigan colleagues will soon replace his guesses with estimates based on actual investigations.

UPDATES, 29 June 2011: A misspelling in last paragraph has been corrected.  John Mark Ockerbloom was kind enough to point out that I had mistakenly used 1998 as the date of most copyright restorations.  The actual date as specified in 104A is of course 1 January 1996.  I have updated the text accordingly.


Comments (0)

The National Jukebox, Copyright, and pre-1972 Sound Recordings

The National Jukebox, the new collection of digitized pre-1925 recordings streamed from the Library of Congress, appears to have been a big success.  The LA Times reported that the website logged more than 1 million page views and 25,000 streams in the first 48 hours after it was announced.  There are many blog postings and Twitter streams that discuss unusual music and spoken recordings included in the 10,000 streams available.  All good news, right?

Among the kudos for the site, one stands out.  Michael Weinberg at Public Knowledge in a post on "The Bittersweet National Jukebox" admires the vast variety of recordings on the site, but also notes its dirty secret: namely that the recordings are not in the public domain, but are still claimed by Sony.

I’ve written in the past about the confused state of pre-1972 sound recordings and how many things that we think might be in the public domain (including Edison wax cylinders) may still be protected by state common law copyrights.  In this case, it would be easy to think that the recordings, most of which were made before 1923, would be in the public domain.  Certainly the sheet music, musical works, and spoken texts that are recorded have likely entered the public domain.  But the recordings themselves will remain protected by copyright until 2067 – even though they are in the public domain in most of the rest of the world, where a 50 year term for sound recordings is the norm.

The continued copyright protection of these recordings has one obvious impact on the National Jukebox site: one cannot download copies of the recordings.  In spite of the fact that it has had a minimum of 85 years to exploit these recordings, Sony has, according to the LA Times, retained the rights to continue to commercialize them.  Apparently anything that the Library of Congress wants to do to preserve these recordings must be done with the permission of Sony.

The potential danger that copyright law poses for the preservation of and access to our recorded cultural heritage led a consortium of archival and library organizations to request that the Copyright Office undertake a study to determine whether the public would benefit if sound recordings were brought under federal protection. The initial responses and the reply comments, both found here, make for fascinating reading.   The Association of Recorded Sound Collections, the Music Library Association, the Society of American Archivists, and the Library of Congress all came out strongly in favor of federalization.  While recognizing that the exemptions available in federal law are far from perfect, they all felt that professionals in their field would be more likely to act if they were operating within the known boundaries that federal copyright law provides.  Furthermore, the possibility that at least some sound recordings (including the sound recordings being made available through the National Jukebox) would enter the public domain would enrich our cultural vocabulary. 

The Recording Industry Association of America in its comments naturally opposed the suggestion.  No privileged rent-seeker will easily concede its monopolistic advantage, and the RIAA did not surprise anyone (except for not knowing its own name in the title of its submission - “Association” has been dropped).

More surprisingly, ARL and ALA joined the RIAA in opposition to new legislation, in spite of the fact that ALA had initially called for the study.  It is well worth reading their comments, but it might be grossly summarized as saying that any legislation is bound to make things worse rather than better.  They suggest it would be easier to get librarians and archivists to change their behavior and assume an aggressive stance on fair use than it would be to change the law. (The two organizations recently issued a similar statement as a reaction to the failure of the Google Books Settlement and the subsequent calls for a legislative remedy.  Legislation, they seem to suggest, is more likely to make things worse than better.)

The ARL/ALA position appeals to the pessimistic side of me.  I wonder, though, if any repository would have been willing to use fair use to justify the creation of the National Jukebox.  Should or could the Library of Congress have just ignored Sony and streamed the music on its own?

The Copyright Office is sponsoring a public meeting on 2 and 3 June in Washington to solicit more input, and then we will await their report.  It is hard to imagine that any legislation that is opposed by the RIAA will get far under this administration, but the Copyright Office earned everyone’s respect by the thoroughness and thoughtfulness of its report on orphan works.  Let’s hope the quality of that report has become the norm.

Comments (1)

Norway as a Model for a GBS Replacement?

The Director of Kopinor, Yngve Slettholm, and the National Librarian of Norway, Vigdis Moe Skarstein, sign the Bookshelf agreement.  Photo: Anne Tove Ørke

Commentary is flowing fast and furious on what the rejection of the Amended Settlement Agreement brokered by the plaintiffs with Google means.  It is hard to imagine, however, a better analysis than that prepared by Jonathan Band for ARL (though if you want something written with more heat, try "To the whingers go the spoils in the Google book decision").

I've been curious as to what the settlement's critics would like to see in its place.  Norway is frequently trotted out as an alternative approach.  In his discussion of the decision in the New York Review of Books, for example, Robert Darnton states that:

The most impressive attempts to create national digital libraries are taking shape in Norway and The Netherlands. They have state support, and they involve plans to digitize books covered by copyright, even those that are currently in print, by means of collective agreements—not legalistic devices like the class action suit employed by Google and its partners, but voluntary arrangements, which reconcile the interests of the authors and publishers who own the rights with those of readers who want access to everything in their national languages.

Similarly in his Op-Ed piece in the New York Times on "A Digital Library Better Than Google's," Darnton cites Norway as one of the countries that is "determined to out-Google Google by scanning the entire contents of their national libraries."

I wonder, though, if the people who cite Norway as a possible model have read the agreement between the Norwegian Digital Library and Kopinor, the Norwegian copyright licensing agency that administers extended collective copyright licenses.  If this is the alternative, then I fear that academia will rue the day that the settlement failed.  Both in its costs and functionality, the Norwegian model is far from "out-Googling Google."  Consider:

  • The Norwegian agreement requires that the Library pay roughly $0.10 per page per year for every title included in the collection (§ 7).  It estimates that books have on average 185 pages (§ 1).  That means $18.50 per title per year, regardless of whether the books are ever claimed, consulted, or used.  It is difficult to compare this to the Google settlement, which had a much higher initial payment for scanning but then predicated future payments on actual use.  But since most of the books that Google scans will likely never be consulted, the argument can be made that the Norwegian model will prove to be much more expensive.
  • There are 50,000 books that are covered by the agreement (§ 1), for what is likely to be an annual payment of just under 1 million dollars.  Michael Cairns, in his study on the number of orphan works, cites Bowker estimates that 2 million books were published in the US between 1920 and 2000.  Does anyone expect the Library of Congress any time soon to start shelling out $37 million a year in license fees for access to mostly out-of-print books?
  • What you can do with the books is much more limited than under the Google settlement agreement.  You can only view the books one page at a time, which makes sense, since you can't download or print the contents (§ 4).  How useful is that?  With Google, you would have been able to view 20% of the book for free, and through an institutional subscription, have been able to download or print.
  • The Kopinor agreement is only for 3 years (§ 21), and if it is not renewed, the digitized books must go away (§ 13).  The Google agreement contained provisions that would have ensured that the digitized books in the Google product would still be accessible, even if Google decided to leave the book business entirely. 
  • The Norwegian agreement was possible because Norway already had a collective rights management organization, Kopinor.  The Google agreement would have created such a group for the US: the Books Rights Registry.  Without the settlement, there is no one left to front the millions it would take to create such a group.
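The cost figures in the first two bullets are easy to check with a back-of-the-envelope calculation.  The per-page rate, average page count, and title counts below are the ones stated above (the 2 million figure is the Bowker estimate Michael Cairns cites); this is only a rough sketch, not an official costing:

```python
# Back-of-the-envelope check of the licensing costs discussed above.
PER_PAGE_PER_YEAR = 0.10   # dollars, per § 7 of the Kopinor agreement
AVG_PAGES = 185            # average book length, per § 1

per_title = PER_PAGE_PER_YEAR * AVG_PAGES
print(f"Per title per year: ${per_title:.2f}")   # $18.50

norway_titles = 50_000     # titles covered by the agreement, per § 1
print(f"Norway, 50,000 titles: ${per_title * norway_titles:,.0f}/yr")

us_titles = 2_000_000      # Bowker estimate of US books, 1920-2000
print(f"US, 2 million titles: ${per_title * us_titles:,.0f}/yr")
```

The 50,000-title figure works out to $925,000 a year ("just under 1 million dollars"), and scaling to 2 million titles gives the $37 million annual fee mentioned above.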

Maybe it would be a better solution to have the Federal government pay whatever fees rights owners demand in order to have their books included in a national digital library database (though I would hope that the range of uses would be greater than Norway allows).  I would think, though, that securing national licenses for current academic and research journal collections would be an even higher national priority.  National access to current research would be an efficient and effective method of maintaining excellence in research and improving national productivity.  There are, however, no U.S. national licenses for journal databases (even though they are common in many European countries).  I don't know why it would be easier to get Federal money to pay the licensing fees for out-of-print and/or mostly outdated books.  In an era of Tea Party politics and concerns about massive annual deficits, it is difficult to imagine the national government stepping in to fund a digital books initiative. 

Comments (0)

New tool: Risk Management Calculator

(by Peter Hirtle)

Those of you who have made it through Copyright and Cultural Institutions know that I am a big believer in the importance of risk assessment when digitizing materials.  The number of instances where we can know with certainty that we can digitize with impunity is very small.  If we limited ourselves to those instances, our digitized collections might consist of nothing but published books issued before 1800.  Risk assessment must be part of every librarian's and archivist's skill set. 

A tool recently released in the UK can help us think in terms of risk.  The Risk Management Calculator was developed to help projects that are building open educational resources (OER) understand the types of factors that might determine specific levels of risk when they include copyrighted items in the resources without the permission of the copyright owner.  The tool asks questions about the material you want to use and how you want to use it, and then generates a numerical score and the level of risk associated with that use.  You can learn more about the tool and its background in this JISC podcast.
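The mechanics behind such a calculator are straightforward to imagine: each answer contributes a weighted number of points, the points are summed, and the total is mapped to a risk band.  A toy sketch follows; the factors, weights, and thresholds here are invented for illustration and are not the actual values used by the JISC tool:

```python
# Toy sketch of a weighted-factor risk calculator.
# All factors, weights, and thresholds are hypothetical --
# they are NOT those of the actual JISC Risk Management Calculator.

WEIGHTS = {
    "work_created_commercially": 30,
    "use_is_commercial": 40,
    "owner_traceable_but_silent": 10,
    "whole_work_used": 20,
}
MAX_SCORE = sum(WEIGHTS.values())  # 100 in this sketch

def risk_score(answers: dict) -> tuple[int, str]:
    """Sum the weights of the factors that apply, then map to a band."""
    score = sum(w for factor, w in WEIGHTS.items() if answers.get(factor))
    if score < 25:
        band = "low"
    elif score < 60:
        band = "medium"
    else:
        band = "high"
    return score, band

# A non-commercial letter, owner traceable but unresponsive, used in full:
print(risk_score({"owner_traceable_but_silent": True, "whole_work_used": True}))
# -> (30, 'medium')
```

The interesting design question, as the Salinger example below suggests, is whether a handful of additive weights can capture factors (like a litigious rights holder) that dominate the real-world risk.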

The tool was clearly developed with UK law in mind.  For example, it seems to place a higher weight on privacy considerations than would a US repository, and doesn't seem to account for whether the subject whose privacy might be invaded is alive or dead.  As a test case, I tried a letter not created with commercial intent, which you wish to make available for non-commercial research or private study, and whose author is both high-profile and traceable but does not respond to a permission request.  The tool gives that a level of 20 (out of 150) and suggests this is low risk.  But let’s put in some names: J.D. Salinger, sending a private, noncommercial letter to a third party, and whose estate doesn’t respond to your request to publish it.  I would think that is about the highest risk you could have for a lawsuit (even if the monetary damages are likely to be low).

But even if the analysis it presents is not perfect under US law, the tool is still helpful in organizing our thoughts as we try to assess and weigh risk.  It reminds us that how we make material available can affect our risk in doing so.  And it may help institutions think about how risk-averse they want to be.


Comments (0) | TrackBack (0)

Kindles and Libraries

The Bodleian Library declaration to which all readers must subscribe states that they must not “kindle therein.”  When the regulation was first written the concern was fire and its prevention.  Kindles and other ebook readers in libraries are today just as hot a topic, and potentially as dangerous.

Kindle in my library, by MARUYAMA Takahiro.  From Flickr.  CC-BY-NC-SA

Thanks to a welcome comment on an earlier post, my attention was drawn to the latest statement of the problem.  Gregory K. Laughlin has made an important contribution to the debate with his article entitled “Digitization and Democracy: the conflict between the Amazon Kindle license agreement and the role of libraries in a free society,” 40 U. Balt. L. Rev. 3.  (Unfortunately, I can’t find a freely-accessible version of the article in SSRN or in an institutional repository, and the University of Baltimore Law Review’s home page does not have the contents of volume 40 posted yet.) 

In the essay, Laughlin extols the historical importance of libraries in American democracy, notes the growing importance of ebooks in publishing, and observes that the restrictions in the Kindle license conflict with the long-established practice of libraries lending books to patrons.  He reviews much of the recent case law concerning what constitutes ownership of digital data, and concludes that courts are likely “to conclude that ‘purchasers’ of e-books from Amazon are not the owners of the content and, therefore, cannot rely on Section 109 of the Copyright Act to convey ownership or even possession of such content to a third party without Amazon’s consent.”  This is in spite of the fact that Amazon frequently indicates that one is purchasing Kindle content with labels such as “buy now.”  Laughlin surmises that “one would expect courts to hold professional librarians to a higher level of sophistication and not permit them to claim that the ‘buy’ and ‘sell’ language on the site represented a change in ownership when the license agreement itself gives ample evidence to the contrary.”  Laughlin concludes that “The only way to guarantee that libraries will be permitted to lend e-books to their patrons is for Congress to amend the Copyright Act to explicitly provide such a right and to make it inalienable (that is, one which cannot be contracted away).”

There is a lot to think about in Laughlin’s essay, but I have two immediate reactions:

1.  I do not see legislative change as being a practical option.  The Section 108 Task Force could not come to agreement over whether the library and archive exceptions should be able to trump contrary terms in non-negotiated license agreements, which are currently enforced through Section 108(f)(4); I don’t see Congress as being less respectful of corporate interests.  The example of Overdrive may be particularly problematic.  Overdrive is proving that libraries can license and deliver books to patrons without acquiring the titles and without using Section 109 (the “first sale” doctrine).  If Amazon were to offer its titles through Overdrive or a similar service, the need for libraries to own digital books that could be lent is greatly reduced.  The problem, of course, is that there are other reasons than just circulation for libraries to “own” digital books, as Laughlin points out.  It may be that libraries’ ready acceptance of the Overdrive model will end up hurting them in the long run.

2.  Laughlin’s piece made me think about what libraries really mean when they say they want to lend Kindles and other ebook readers.  I see at least four options, with various risks associated with each approach:

A.  One approach would be to purchase an ebook reading device (say a Kindle) and an ebook to go on that device and then lend the package to patrons.  While a technical violation of the purchase agreement for the ebook and possibly a violation of the license terms for the software on the device, it is hard to believe that Amazon or a publisher would bring an action against a library for this behavior.  It is, however, an expensive way for a library to acquire content!

B.  A second approach would be to purchase one ebook and load it on multiple devices.  Currently Amazon allows 6 Kindles to be registered to one account, so in theory a library could purchase a single book and load it on 6 devices for loaning.  However, this clear violation of the license terms is more likely to lead to a lawsuit or complaint.  And buying multiple Kindles would still be more expensive than buying hardcover printed books.

C.  A library could “buy” (in reality, license) a book that could be loaded onto a patron’s device for a limited period of time.  This is the Overdrive model; it also seems to be what Laughlin would like to see Congress authorize.  While legal, it does not take advantage of the trivial cost of reproducing digital content.  With this model, one digital file can be used by one patron.  If you want a second patron to be able to read the book, you have to “purchase” a second copy, often at full price.

D.  What I suspect most libraries would like is to be able to actually purchase an ebook, have a copy of it on a local server (the equivalent of a shelf in the local library), and then allow an unlimited number of patrons to check out the title.  I don’t see this ever happening.

Comments (3)
