Information Economics: Not Just for Private Business

Berkeley Center for Law and Technology’s Seventh Annual Privacy Lecture was held on October 6, 2014, on Berkeley Law School’s campus. Moderated by Paul Schwartz, the presentation began with Ross Anderson presenting his recent paper, Privacy versus government surveillance – where network effects meet public choice. The panel continued with responses from Carl Shapiro, James Aquilina, and Anupam Chander before opening the floor to audience questions. This post summarizes each speaker’s presentation.

Ross Anderson: Network Effects and Government Surveillance

To lay the foundation for the discussion, Anderson first introduced two views of money and power. The “Bay Area view” is that money and power are all about network effects, the effect that one user of a good or service has on the value of that product to other people; these network effects help create a platform to which others then add value. Anderson contrasted this with the Washington D.C. view, where power is about having more “tanks and aircraft carriers, which is founded on taxation capacity,” and almost nobody speaks of network effects.

Network effects are characteristic of many IT product and service markets, and they tend to produce dominant-firm markets where the winner takes all. Another common feature of IT product and service markets is high fixed costs combined with low marginal costs: competition drives prices down toward the marginal cost of production, which can make it hard to recover capital investment unless the firm is protected by patents, brands, or network effects. A third common feature of IT markets is that switching technology platforms is expensive, so customers get “locked in.” Anderson cited the Shapiro-Varian observation that the net present value of a software company is the total of its customers’ switching costs.

The combination of network effects, low marginal costs, and technical lock-in makes dominant-firm market structures very likely and explains many security and privacy failures. First, market races lead to the Microsoft philosophy of “ship it Tuesday and get it right by version 3.” During a market race, companies open their systems to appeal to those who can complement them, such as app writers. Once the race is “won,” companies lock down the system in order to extract rents from its use. Thus, in many markets (Anderson gave the examples of mainframes, PCs, routers, phones, and social network systems), security is added after the system’s creation, and the design is aligned with the platform’s interests at least as much as the users’.

Privacy, meanwhile, suffers from at least the same problems as security. For example, privacy suffers from asymmetric information, because users don’t know what is done with their data. Hyperbolic discounting can also be an issue, as many users don’t consider or care about the long-term effects of disclosure. Researchers observe the paradox that people say they want privacy but act otherwise, as evidenced by the fact that most privacy startups fail. In a nod to Berkeley Law, Anderson mentioned that the first workshop on the economics of information security was held here in 2002; the field has since grown to over 100 active researchers, who build models of what is likely to go wrong and attempt to measure it.

Anderson also introduced the economics of surveillance, claiming that network effects were a driving force behind mass surveillance. The concentration of the service industry into a few large firms made something like PRISM foreseeable, and the concentration of the telecoms industry into a handful of large operators similarly made TEMPORA foreseeable. (The latter was also described by several journalists in its earlier form, “Echelon.”)

Outside the realm of information security economics, network effects also matter in the defense/intelligence nexus: neutral states like India prefer to join the biggest network (read: the United States). However, network effects also entangle us with “bad” states that use the same surveillance platforms, leading to problems such as the debate over exports to Syria. Network effects thus present both political and civil rights problems, as they pull “good” and “bad” parties together. Compared to medieval warfare technology, which ran entirely on marginal costs, these days “to kill a foreign dictator you can use a single missile shot from a drone – because it’s backed by trillions of capital investment.” Thus, Anderson claims, “warfare has gone from labor to capital” and has created complex technical lock-in games. Furthermore, each country within the “Five Eyes” can decide whether to minimize its citizens’ personal data, but only Canada has done so. (Anderson suspects this is because government forms in Canada are confidential once completed, unlike in the US or the UK.) Law enforcement network governance, which takes forms ranging from Interpol to mutual legal assistance treaties, is slow and cautious.

Yet the question remains: is the world dealing with one network or many? Anderson argues that networks tend to merge (i.e., the Internet absorbs everything else). He noted that intelligence resources are already used for the rapid solution of exceptional crimes, citing the NTAC and the Communications Data Bill in the UK, and PRISM in the US. And what will the day-to-day effect of this kind of world be? Anderson illustrated the impact of this consolidation as follows: “is it okay, for example, if we move into a world where every inhabited space has cameras and mics, and the cameras and microphones are sharing information with the cloud, can we presume consent to information sharing? . . . Or does Mommy have to pick up the bear and hit the ‘I Consent’ button before the bear reads the bedtime story to the kids?”

Anderson concluded by examining long-term issues, with implications ranging from international relations to the separation of powers and the rule of law. First, he noted that Britain provides access to 30% of the Internet; what effects might this have on the US? Also, if code is law, architecture is police: what are we embedding in the infrastructure, and how will it affect our descendants? Above all, we need to solve the governance issue. The Bay Area v. D.C. gap is not just about whether Snowden is a whistleblower or a traitor; the two sides’ economic models are almost totally different. Yet, Anderson concluded, the economics of security and privacy models pioneered at Berkeley a dozen years ago could apply here, too.

Carl Shapiro: Economics of Network Effects

In response, Carl Shapiro focused on the economic aspects of Anderson’s paper: specifically, how do we translate the economics of innovation to government surveillance, the government sector, and international relations? This issue is of particular concern to those who work in both the national economy and national security. The main issue, Shapiro argued, is that we need to understand the networks in use today and answer questions like “How, if at all, does the network economics of the private sector translate to the public sector?” and “What is the interface between law enforcement and surveillance operations?” In many ways, the U.S. leads the world in its information capabilities.

Shapiro answered some of these questions with observations of his own. First, he claimed that an economist will say that organizations matter as well as incentives, owing to the cost of sharing information across organizational boundaries. Second, so far these organizations have not been effective at sharing best practices to create consistency. Shapiro jokingly concluded by offering an alternative title for Anderson’s paper: “Network effects meet public sector disorganization and dysfunction.”

James Aquilina: Network Effects, Law Enforcement, and Civil Action

James Aquilina responded next, discussing the law enforcement and civil aspects of surveillance. He self-identified as cynical, stating that the way he thinks about information and user privacy is that it is ultimately our choice: we could all choose not to connect to a wifi network right after landing at the airport, or not to download apps that share our data. He observed that people act with impatience and a demand for the latest and greatest technology, turning a blind eye to the price they pay in personal privacy. He added that it is personally frustrating that people rarely discuss the risks associated with more restricted intelligence collection. (Aquilina called this negative impact on communication between the government and ISPs the “tragedy of the Snowden affair.”)

On the civil side, Aquilina noted that emerging companies are moving so quickly that they are not thinking about security for what they are building or about the impact on their consumers. Furthermore, the costs associated with a breach (whether from a competitor, an advanced persistent threat, or an insider threat) are “incredible.” Aquilina illustrated the growing overlap of law enforcement and intelligence with a personal anecdote: when he started at the US Attorney’s office, tips did not involve computer media; now the seizure of suspects’ cellphones, iPads, and computers is routine. This has changed the way routine investigations are conducted, but the law hasn’t kept up with the technology. Aquilina concluded by calling for a better way to deal with digital evidence: “When you think of the effect of Snowden and that kind of revelation, what is unfortunate is that there is less focus on laws that proscribe criminal activity and an effort to bring them current and the way in which technology is now used as instrumentality of a crime.”

Anupam Chander: Global Due Process

The last presenter was Anupam Chander, who discussed the concept of Global Due Process. First, in response to Aquilina’s remarks, Chander said he was not sure he wants cooperation between ISPs and the government to be without tension, given his concerns about government abuse of information. In particular, he felt that Anderson’s paper showed how a global information network can become a global spying network, one problem being that there are no effective legal constraints on U.S. surveillance of non-U.S. persons abroad. In response to this lack of protection in U.S. law for those outside the U.S., foreign governments have begun trying to “unplug” from the American internet. Chander observed this effect in countries like Brazil, Germany, and Russia, which are trying to stop information from leaving the country, rather than employing the usual tactic of preventing information from entering in the first place.

Chander identified a couple of potential solutions to this issue. First, the USA Freedom Act seeks to end some mass surveillance under Section 215 of the PATRIOT Act; it also institutes an amicus curiae at the FISA Court to “advocate, as appropriate, in support of legal interpretations that advance individual privacy and civil liberties.” Second, countries should find a consistent way to treat all information with global due process. Until such solutions can be implemented, Chander left the crowd with a self-help tip reminiscent of Aquilina’s comments: encrypting your own data is a strong way to protect it.


Oracle v. Google – or How a File Cabinet beat Harry Potter

The Oracle v. Google case, currently on appeal before the Court of Appeals for the Federal Circuit, will decide whether APIs (Application Programming Interfaces) are copyrightable subject matter under sections 102(a) and 102(b) of the Copyright Act. But it is also about Harry Potter and a file cabinet – the metaphors Oracle and Google used to explain the character of APIs to the court: Oracle invoking Harry Potter, and Google literally wheeling a file cabinet into the courtroom to make its point.

So sticking to the metaphors used by the parties, this post is about how Oracle’s Harry Potter got knocked off his broomstick in mid-air by the file cabinet Google hurled at him. It provides an overview of the history leading to the Oracle v. Google case, the technical and legal background, and the reasoning Judge William Alsup used in his order in favor of Google.

The Story Behind the Case

Sun Microsystems developed the Java programming framework, comprising, inter alia, a system of interfaces (APIs) and an implementation of a function library based on them, both commonly referred to as the “Java API”. In 2010, Oracle acquired Sun and merged it with Oracle USA to form Oracle America. The distinctive feature of the Java programming framework is that it allows programs written in Java to run on many different platforms without the developer having to rewrite the program for each operating system. Sun promoted this feature with the catchphrase “Write once, run anywhere.”

Google had been developing Android, an operating system aimed at portable devices, since 2005 and released it in 2007. Google first tried to negotiate a license with Sun to use and adapt the Java platform for mobile devices, but the negotiations failed. So Google came up with its own implementation of the Java environment in Android, in order to allow programs that had been written by third party developers in Java to run on Android as well.

Oracle brought suit against Google in August 2010, claiming patent and copyright infringement. Procedurally, Judge Alsup split the case into three different phases covering issues of copyright, patents, and (if necessary) damages. The focus of this post is on the copyright prong only.

The Technical Background in a Nutshell

For a better understanding of the copyright law issues raised by this case, it is necessary to look at some of the underlying technical facts. Judge Alsup allegedly even learned to code in Java for the trial, and his order presents an extensive introduction to the Java programming framework (pp. 4-13 of the order).

In a nutshell, the technical background can be distilled into the following five technical terms:

  • API,
  • function,
  • method,
  • declaration/header, and
  • (method) body.

The API (Application Programming Interface) is the definition of the interface, the abstract specification of how different programs interact with each other (e.g. like the reference system used in a library to help users locate books).

A function is a specific subroutine of a computer program – a sub-program that has been factored out of the main program code for efficiency reasons (e.g., the print function of an operating system, which different applications can share). In the library-metaphor, this would be the story that a specific book tells.

A method is the concrete implementation of such a function, consisting of a declaration/header and the body of the method. In the library-metaphor, this would be the actual book that contains the specific story.

The declaration/header of the method tells the programs what the function of that method is and how it can be invoked. In the library-metaphor, this would be the title and reference code for a book that one looks up in the library reference system in order to find it in the library.

The body of the method contains the code implementing the function of that method. In the library-metaphor, this would be the actual text of a book in which the story is told (as distinguished from its title and reference number).
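These distinctions can be made concrete in a short Java sketch. The class and method names here are hypothetical, chosen only to illustrate the terminology above; this is not the litigated code:

```java
// Hypothetical class, used only to illustrate the terminology above.
public class MathBook {

    // Declaration/header: the method's name, parameter types, and return type --
    // in the library metaphor, the title and reference code of the book.
    public static int max(int a, int b) {
        // Body: the code implementing the function --
        // in the library metaphor, the actual text of the book.
        if (a > b) {
            return a;
        }
        return b;
    }

    public static void main(String[] args) {
        System.out.println(MathBook.max(3, 7)); // prints 7
    }
}
```

A caller only needs to know the declaration to invoke the method; the body can be rewritten freely so long as the declaration, and the behavior it promises, stays the same.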

Where’s the Infringement?

Google wrote most of Android’s Java environment itself, in particular the implementations of the functions in the Java libraries (the bodies of the methods). However, Google used in part exactly the same interface definitions (APIs) as Oracle’s Java version. By using the same APIs, Google enabled application programmers to call certain functions by the same names in Android’s Java version as in Oracle’s. This made it easier for application programmers to develop compatible programs and, to a certain extent, allowed programs already written in Java to run on Android without being rewritten.

In the copyright prong of the Oracle v. Google case, there was agreement that Google had not literally copied the libraries (i.e., the collections of methods), but had created its own implementations (i.e., Google had written its own code in the bodies of the methods to achieve the same functions). The issue at hand was rather whether Google had violated copyright law by its verbatim use of the declarations/headers, thus replicating the interfaces (APIs) and the structure, sequence and organization (SSO) of the libraries in question.
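A minimal sketch of what this means in practice (the class names are hypothetical, used only for illustration): two classes can share a verbatim-identical method declaration while their bodies are written independently, and code written against one runs unchanged against the other.

```java
// Hypothetical illustration of identical declarations with independent bodies.
class LibraryA {
    // declaration: public static int max(int a, int b)
    public static int max(int a, int b) {
        return (a > b) ? a : b;       // one implementation
    }
}

class LibraryB {
    // verbatim-identical declaration, independently written body
    public static int max(int a, int b) {
        if (a >= b) {
            return a;                 // a different implementation with the
        }                             // same observable behavior
        return b;
    }
}

public class SameHeader {
    public static void main(String[] args) {
        // A caller cannot tell the two implementations apart.
        System.out.println(LibraryA.max(2, 9)); // prints 9
        System.out.println(LibraryB.max(2, 9)); // prints 9
    }
}
```

This is the sense in which Google replicated the declarations while supplying its own method bodies.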

The Copyright Question at Hand – and its Relevance

The copyright law question at the heart of the Oracle v. Google case is thus whether the APIs, and Google’s implementation of them in Android’s Java version, are copyrightable subject matter under sections 102(a) and 102(b) of the Copyright Act.

The answer to this question is of great relevance to the software industry: in today’s interconnected world, no computer program works as a standalone application; rather, it communicates and interrelates with various other computer programs (e.g., operating systems or other application programs). But in order for programs to communicate, they need to be interoperable. And interoperability is achieved by means of interfaces, by standardized processes of exchanging and receiving data – by APIs.

Thus, if such APIs are found to be copyrightable subject matter, copyright law might confer a monopoly right, enabling a copyright holder to preclude (or at least control and financially benefit from) future development – which might impede innovation. On the other hand, developers might argue that such a monopoly right offers them a valuable incentive, leading to more and faster innovation.

The Copyright Law Background

Section 102(a) and 102(b) of the Copyright Act define copyrightable subject matter as “original works of authorship fixed in any tangible medium of expression”, 102(a), with the exception of “any idea, procedure, process, system, method of operation, concept, [or] principle [...],” 102(b).

While the requirements of authorship and fixation did not raise any issues in Oracle v. Google, the requirements of originality and expression did.

Originality and the Words and Short Phrases Doctrine

The threshold to meet the originality requirement is rather low and easily met: “Original, as the term is used in copyright, means only that the work was independently created by the author (as opposed to copied from other works), and that it possesses at least some minimal degree of creativity.” Feist Publ’ns, Inc. v. Rural Tel. Serv. Co., 499 U.S. 340, 345 (U.S. 1991).

Nevertheless, certain categories of works may lack the required degree of originality if they are so blunt, short, or obvious that they in fact lack even the slightest “spark of creativity.” This “de minimis” standard excludes words and short phrases from copyright protection, or as the U.S. Copyright Office states: “Copyright law does not protect names, titles, or short phrases or expressions. Even if a name, title, or short phrase is novel or distinctive or lends itself to a play on words, it cannot be protected by copyright.” The courts refer to this principle as the “words and short phrases doctrine” (cf. Sega Enters. v. Accolade, Inc., 977 F.2d 1510, 1524 n.7 (9th Cir. 1992)).

Expression and its Nemeses

Section 102(b) of the Copyright Act contains carve outs from copyrightability, by defining what is not considered “expression” as per section 102(a). The courts have come up with several doctrines and tests to apply section 102(b) in practice:

  • the Altai test;
  • the idea/expression and process/expression dichotomies;
  • the merger doctrine, and
  • the scènes à faire doctrine.

The Altai test (also called the Abstraction-Filtration-Comparison or AFC test) defines a three-step procedure to determine whether the non-literal elements of two computer programs are substantially similar – that is, whether a computer program’s non-literal but nevertheless copyrightable “structure, sequence and organization” (SSO) has been infringed. The Altai test was formulated in Computer Assocs. Int’l v. Altai, 982 F.2d 693 (2d Cir. 1992), based on ideas first expressed by Judge Learned Hand in Nichols v. Universal Pictures Corp., 45 F.2d 119 (2d Cir. 1930).

The idea/expression dichotomy (or distinction) stands for the basic principle of copyright law that pure ideas – abstract principles or fundamental truths – are not protectable; only their concrete expression is. The closely related process/expression dichotomy stands for the notion that not just ideas (including concepts, principles, and discoveries) but also procedures, processes, systems, and methods of operation must be distinguished from their concrete expression, as only the latter is protectable by copyright. There is no clear agreement among scholars as to where these concepts derive from, but reference is most commonly made to Baker v. Selden, 101 U.S. 99, 102 (1880) (for a more extensive analysis, cf. Pamela Samuelson, Why Copyright Law Excludes Systems and Processes from the Scope of its Protection, 85 Tex. L. Rev. 1921 (2007)).

The merger doctrine is the exact antipode of the idea/expression and process/expression dichotomies, applying where the idea or process is so inextricably intertwined with the expression that the two cannot be separated or distinguished. As Samuelson puts it: “The merger doctrine holds that if there is only one or a very small number of ways to express an idea, copyright protection will generally be unavailable to that way or those few ways in order to avoid protecting the idea.” Pamela Samuelson, Questioning Copyrights in Standards, Boston College Law Review, Vol. 48:193, 215 (2007).

The scènes à faire doctrine refers to “must do” elements necessary to certain works, to “stock ideas”: “The French use a very expressive phrase in dramatic literature: ‘scenes a faire’ that is, scenes which ‘must’ be done.” Schwarz v. Universal Pictures Co., 85 F. Supp. 270, 275 (D. Cal. 1945). Although this doctrine was first developed for literary works, it came to be understood more generally: not just for standard literary works but also for computer programs, and not just for certain “stock ideas” but for various external factors constraining an author’s creative choices: “The scenes a faire doctrine, originally developed to recognize that certain plot structures are to be expected from works exploring certain literary or dramatic themes, has been adapted, especially in the software copyright case law, to recognize that expressive choices of subsequent authors may become constrained over time by the emergence of industry standards.” Pamela Samuelson, Questioning Copyrights in Standards, Boston College Law Review, Vol. 48:193, 215 (2007).

The Metaphors used by Oracle and Google

In order to explain to the court how best to apply these principles, tests, and doctrines to the APIs at the heart of the case, both Oracle and Google used metaphors. Each tried to present its view of the copyrightability of APIs by choosing a metaphor that would, once the court applied the doctrines above, lead to a finding of copyrightability (Oracle) or non-copyrightability (Google), respectively.

Oracle made its point for copyrightability by comparing the affected method libraries to a Harry Potter novel:

“Ann Droid wants to publish a bestseller. So she sits down with an advance copy of Harry Potter and the Order of the Phoenix – the fifth book – and proceeds to transcribe. She verbatim copies all the chapter titles – from Chapter 1 (‘Dudley Demented’) to Chapter 38 (‘The Second War Begins’). She copies verbatim the topic sentences of each paragraph, starting from the first (highly descriptive) one and continuing, in order, to the last, simple one (‘Harry nodded.’). She then paraphrases the rest of each paragraph.”

(Oracle Opening Brief and Addendum of Plaintiff-Appellant, 1, February 11, 2013).

Google, on the other hand, pointed to Section 102(b) of the Copyright Act and used a file cabinet to show that APIs and their SSO are just a system of organization – a method of operation that allows programmers to call up specific functions in a structured way:

“So just by writing the API, Java.lang.Math.Max(), that source code appears and comes into the program.

And now I actually created — excuse me, your Honor. I’m going to approach the cabinet. I actually created a cabinet to illustrate this because, again, I think it’s important for everybody to understand what we’re talking about when we say structure and organization of an API.

This is a cabinet. This is the Java language package. It happens to be a file cabinet. There are 37 of these that they are complaining about. They are not complaining about using the language, because that’s free. The names were all free. The complaint is about the system of organization. But you need that in order to program in Java.

So if I want to find this max() function. I write java.lang.Math.max() and the system knows I go to the java.lang package. I open the Math drawer.

Now, in the Math drawer are all the methods that are in the math class in Java. And by the way, they are typically organized alphabetically. Nothing too magic about that. They are organized alphabetically. But one of them would be my max() folder. And I take my max() folder out and inside it is the source code. That’s the original source code that Google wrote. [...]

And what we’re talking about here is nothing more than this system of organization that has been around for years and programmers had been using whenever they program in Java. That’s what is at issue in this case.”

(Transcript of Jury Trial Proceedings, 263:2-264:6, April 17, 2012, Case 10-cv-03561, Doc. 943 (emphasis added))
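The navigation counsel describes can be seen directly in Java source: the fully qualified name `java.lang.Math.max()` walks the hierarchy of package (cabinet), class (drawer), and method (folder). A minimal sketch using the real `java.lang.Math` API:

```java
public class CabinetDemo {
    public static void main(String[] args) {
        // java.lang (the cabinet) -> Math (the drawer) -> max (the folder):
        // the fully qualified name is the "reference code" that locates the method.
        int result = java.lang.Math.max(4, 11);
        System.out.println(result); // prints 11

        // java.lang is imported implicitly in every Java program,
        // so the short form works too.
        System.out.println(Math.max(4, 11)); // prints 11
    }
}
```

The caller never sees the method's body; the naming hierarchy alone is what locates and invokes the implementation.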

Judge Alsup’s Metaphor and His View on Copyrightability of APIs

In his order of May 31, 2012, Judge Alsup also uses an analogy and characterizes APIs by using the picture of a library:

“An API is like a library. Each package is like a bookshelf in the library. Each class is like a book on the shelf. Each method is like a how-to-do-it chapter in a book. Go to the right shelf, select the right book, and open it to the chapter that covers the work you need.”

(Order re Copyrightability of certain replicated elements of the Java application programming interface, 5:16-5:18, Case 10-cv-03561-WHA, Doc. 1202, May 31, 2012).

In the first pages of his order, Judge Alsup gives a detailed introduction to the technical intricacies of the Java programming framework (pp. 4-13), which allows him to distinguish clearly what exactly Oracle accused Google of having replicated:

“All agree that Google was and remains free to use the Java language itself. [...] All agree that the six-thousand-plus method implementations by Google are free of copyright issues. The copyright issue, rather, is whether Google was and remains free to replicate the names, organization of those names, and functionality of 37 out of 166 packages in the Java API, which has sometimes been referred to in this litigation as the “structure, sequence and organization” of the 37 packages.”

(Order re Copyrightability of certain replicated elements of the Java application programming interface, 6:25-7:03, Case 10-cv-03561-WHA, Doc. 1202, May 31, 2012).

Judge Alsup then traces the development of copyright law through an extensive review of cases and materials.

Based on that extensive review, Judge Alsup concludes that the issue at hand in the Oracle v. Google case is controlled by (a) the merger doctrine, (b) the words and short phrases doctrine, (c) the idea/expression and process/expression dichotomies, and (d) the “no sweat of the brow” doctrine of Feist (33:13-34:05 of the order).

In application of these doctrines, Judge Alsup concludes that “[f]unctional elements essential for interoperability are not copyrightable” (34:01-34:02 of the order) and that even though there might be creativity in the definition and creation of APIs, they are nevertheless a system or method of operation and as such not copyrightable subject matter:

“That a system or method of operation has thousands of commands arranged in a creative taxonomy does not change its character as a method of operation. Yes, it is creative. Yes, it is original. Yes, it resembles a taxonomy. But it is nevertheless a command structure, a system or method of operation – a long hierarchy of over six thousand commands to carry out pre-assigned functions. For that reason, it cannot receive copyright protection – patent protection perhaps – but not copyright protection”

(37:15-38:02 of the order)

In conclusion, Judge Alsup thus followed the argument Google made with its file cabinet analogy (for which he substituted his library metaphor) and held the APIs and the SSO of their implementation in the method libraries to be unprotectable under Section 102(b) of the Copyright Act.

The Pending Appeal

Both parties have appealed Judge Alsup’s order, and the appeal is currently pending before the U.S. Court of Appeals for the Federal Circuit (because the original case contained patent law questions). Oral arguments were held on December 4, 2013, and as of this writing in April 2014, an appellate decision could be expected at any time.

Florian Mueller, author of the FOSS PATENTS blog, interprets the proceedings at oral argument as a sign that Judge Alsup’s order will be reversed (Florian Mueller, Oracle apparently winning Android-Java appeal against Google — API declaring code copyrightable, December 04, 2013, and Florian Mueller, Detailed analysis of Federal Circuit hearing in Oracle v. Google: copyrightability is certain, December 06, 2013).

However, other sources have questioned Mueller’s neutrality by pointing at preexisting relationships between Oracle and Mueller (Charles Cooper, Oracle names bloggers, others it paid to comment on Google trial, August 17, 2012), as disclosed by Oracle according to a court order.


It is far from clear how the Court of Appeals will decide. Given the thoroughness of Judge Alsup’s technical and copyright law analysis, the court might affirm the district court’s decision. But given the probing questioning at oral argument, one should not be too surprised if Judge Alsup’s order were reversed.

In either case, the question at the heart of the Oracle v. Google case has the potential to go all the way to the Supreme Court and to set a new landmark case in the field of software copyright law.

To sum it up using the parties’ metaphors: Oracle’s Harry Potter was hit by Google’s file cabinet and knocked off his broomstick in the district court. But the day isn’t over yet, and with a helping hand from the Court of Appeals, Harry Potter might just get back on his broomstick – for a final battle against the file cabinet to be fought before the Supreme Court.


Winners of 2014 Notes and Comments Competition

BTLJ is proud to announce the winners of the 2014 Student Writing Competition. Congratulations to our winners, and thank you to all participants for your submissions!

First Place: “Compelling Passwords from Third Parties: Why the Fourth and Fifth Amendments Do Not Adequately Protect Individuals When Third Parties Are Forced to Hand Over Passwords,” by Sarah Wilson (Northwestern University School of Law)

Second Place: “Patenting Proteins after Myriad,” by Priti Phukan (University of San Diego School of Law)

Third Place: “Establishing Standing in Risk of Future Identity Theft Cases,” by Rebecca Dell (University of Pennsylvania School of Law)

Aldo J. Test Award for Best Berkeley Law Submission: “Proposed Limitations on a Copyright Exemption for Museums,” by Julie Byren (UC Berkeley School of Law)


New Rules Applying to IP Licenses in Europe Starting May 1, 2014

On March 21, 2014, the European Commission, the pan-European enforcer of antitrust rules, adopted a new version of its Technology Transfer Block Exemption Regulation and accompanying Guidelines. The new Regulation, which will take effect on May 1, 2014, once the Regulation currently in force (the “old Regulation”) expires on April 30, continues along the same lines and exempts certain categories of licensing agreements from the prohibition against anti-competitive agreements in Article 101 of the Treaty on the Functioning of the European Union (“TFEU”). If an agreement falls under the Regulation, it will be “deemed to have no anticompetitive effects or, if they do, the positive effects outweigh the negative ones”. The most radical changes relate to termination and grant-back clauses (which will never be exempted), passive sales (which can never be hindered, in accordance with other EU legislation), and IP settlements and pools (for which additional guidance is provided).

While the new Regulation and Guidelines will start applying in a few days, a one-year transitional period (until April 30, 2015) is contemplated in Article 10 of the new Regulation. This transitional period extends the benefits of the Block Exemption to agreements that were compliant with the old Regulation but are not compliant with the new one.

Continue reading


Proving a Likelihood of Confusion Remains an Uphill Battle for Trademark Owners in Keyword Advertising Cases

How confused are you when you search for a term and see paid advertising generated by the keywords you entered? A growing number of cases show that likelihood of confusion is a difficult element for plaintiff trademark owners to prove.

People use search engines to weed through the millions of products available on the Internet. Search engines have thus become an important marketing tool for directing web traffic to businesses and a major source of revenue for search engine providers, such as Google. Search engine providers sell search terms, or keywords, to advertisers, allowing advertisement links to appear on the results page when the purchased keywords are searched. Google generated over $50 billion in revenue from its advertising program in 2013.

In 2004, Google updated its AdWords advertising policy to permit advertisers to purchase trademarks as search terms with or without the trademark owners’ approval. Thus, if a soft drink company buys the keyword “coca cola,” anytime a user searches the term “coca cola,” Google displays both the paid advertising listing containing the advertiser’s link and the free organic search results. This controversial policy has resulted in more than 20 lawsuits filed against Google.

Continue reading


2014 Symposium: Fair Use for Free, or Permitted-but-Paid?

BTLJ is excited to welcome Jane C. Ginsburg of Columbia Law School on April 3–4, 2014 to the 18th Annual BTLJ/BCLT Symposium: The Next Great Copyright Act.

This is a summary of Professor Ginsburg’s topic of discussion and forthcoming article:

Fair use has gone off the rails, first with the Sony “Betamax” decision, and more recently with the transformation of “transformative use” from a factor fostering new creativity to one favoring new copyright-dependent business models and socially beneficent reiterative uses.  We should cease muddling authorship-grounded fair uses with judge-made exceptions whose impetus derives from distinct considerations.  Moreover, I suggest that the other exceptions should not always produce free passes.  Instead, I propose that many of the current social subsidy fair uses and market failure fair uses be “permitted but paid,” and explore how we might implement that proposal.


2014 Symposium: Copyrightable Subject Matter in the Next Great Copyright Act

BTLJ is excited to welcome R. Anthony Reese of UC Irvine School of Law on April 3–4, 2014 to the 18th Annual BTLJ/BCLT Symposium: The Next Great Copyright Act.

This is a summary of Professor Reese’s topic of discussion and forthcoming article:

Drafting the Next Great Copyright Act will require defining the scope of subject matter protected by the Act. This key aspect of framing a revised copyright statute determines what can and cannot be protected by copyright and represents Congress’s judgment about which creations of authors need copyright’s protection and which should remain free from claims of ownership that would restrict copying. This Article proposes four principles to guide the drafters of the Next Great Copyright Act in framing the act’s subject matter provisions. First, Congress should expressly enumerate all of the categories of works that are protected by the statute, and should not draft the statute to allow courts or the Copyright Office to recognize copyright in additional, unenumerated categories. Second, Congress should decide which categories of works to actually protect, and should not simply grant protection to everything that constitutes the “Writing” of an “Author” under the Constitution’s Copyright Clause. Third, Congress should define each of the categories of copyrightable works to which it grants statutory protection. And fourth, Congress should grant copyright protection to a compilation or a derivative work only if that compilation or derivative work falls within one of the expressly enumerated categories of protectable works.


2014 Symposium: Copyright Licensing and Fair Use

BTLJ is excited to welcome Rebecca Tushnet of Georgetown Law School on April 3–4, 2014 to the 18th Annual BTLJ/BCLT Symposium: The Next Great Copyright Act.

This is a summary of Professor Tushnet’s topic of discussion and forthcoming article, All of This Has Happened Before and All of This Will Happen Again:

Claims that copyright licensing can substitute for fair use are nothing new.  This cycle’s variation in the licensing debate, however, offers a few tweaks.  First, the new licenses often purport to allow the large-scale creation of derivative works, rather than the mere reproduction that was the focus of earlier blanket licensing efforts.  Second, the new licenses are often free, or even offer opportunities for users to profit.  Rather than demanding royalties, copyright owners just want a piece of the action—along with the right to claim that unlicensed uses are infringing.  In a world where licenses are so readily and cheaply available, the argument will go, it is unfair not to get one.

These new attempts to expand licensing in ways that take into account the digital economy and the rise of “user-generated content” also face a fair use doctrine that is in some ways less favorable to copyright owners than it was several decades ago, when a few key decisions supported the rise of (allegedly) blanket reproduction licenses.  While copyright owners have lost some significant cases in court, they are trying to change the facts on the ground to achieve many of the same benefits that they could get from a legally established right to license transformative uses.  This short paper will describe recent innovations in licensing-by-default in the noncommercial or formerly noncommercial sphere and discuss how the licensed versions differ from their unlicensed alternatives in ways both subtle and profound.  These differences, which change the nature of the communications and communities at issue, help explain why licensing can never substitute for transformative fair use, even when licenses are routinely available.

Initiatives such as YouTube’s Content ID, Getty Images’ new free embedding of millions of its photos, and Amazon’s Kindle Worlds all attempt to get internet users accustomed to copyright owner supervision—with a very light, rarely visible touch—of uses that are individually low-value but might produce some aggregate income, or at least some consumer behavior data that could itself be monetized.  While there’s room in the copyright ecosystem for these initiatives, it would be a grave mistake to conclude that the problem of licensing has finally been cracked and that fair use can now, at last, retreat to a vestigial doctrine. Ultimately, as courts have already recognized, the mere desire of copyright owners to extract value from a market—especially when they desire to extract it from third parties instead of licensees—should not affect the scope of fair use.


2014 Symposium: Reforming Section 108 for Libraries and Archives

BTLJ is excited to welcome David R. Hansen of UNC Law School on April 3–4, 2014 to the 18th Annual BTLJ/BCLT Symposium: The Next Great Copyright Act.

This is a summary of Mr. Hansen’s topic of discussion and forthcoming article:

U.S. libraries, archives, and museums are stewards of some of the largest collections of copyrighted content in the world. These institutions hold well over 2.8 billion items, the vast majority published since 1922, the year in which the public domain effectively ends for many users in the United States. This article is about how Section 108 reform can help (and potentially hurt) these organizations in their efforts to preserve and make this content more available to the world.

The existing law of fair use, first sale, and statutory limitations on remedies are all critically important tools for further opening library and archive collections to the world. Indeed, with current library practice, Section 108 is of little use for many library projects; the flexible doctrine of fair use is relied upon instead. Any effort to reform Section 108 must, as its first goal, preserve the availability of fair use and these other tools. But within that context, a modest update to Section 108 could provide an important safe harbor for libraries and archives that seek more certainty about making their collections available online.


2014 Symposium: Legislating Digital Exhaustion

BTLJ is excited to welcome Aaron Perzanowski of Case Western Reserve University Law School and Jason Schultz of New York University Law School on April 3–4, 2014 to the 18th Annual BTLJ/BCLT Symposium: The Next Great Copyright Act.

This is a summary of the authors’ topic of discussion and forthcoming article:

The shift to a digital distribution model, from one premised on selling physical artifacts to one defined by transferring data, is among the most important changes in the markets for copyrighted works since the enactment of the 1976 Copyright Act. The disconnect between our current statutory rules and this new reality of the copyright marketplace is particularly evident when it comes to the question of exhaustion. The first sale doctrine in Section 109 was constructed around a mode of distribution that is rapidly becoming obsolete. As a result, the benefits and functions it has long served in the copyright system are at risk. Building on our earlier work, this Article will argue that a meaningful exhaustion doctrine should survive the digital transition. After explaining the two primary hurdles to digital exhaustion under the existing statutory regime, we outline two possible approaches to legislating digital exhaustion, concluding that a flexible standards-based approach that vests considerable authority with the courts is the better solution.
