Code, and other laws... (part 1)

There are tens of millions of RSS and Atom feeds published on the Web.  And nearly all of them are copyrighted.

If an author doesn't explicitly give up all rights to a work, which might be a bit tricky, it's automatically copyrighted in the United States and most other countries.  Of course the same is true of web pages.  But web pages are mostly intended to be viewed in a browser.  Feeds are generally intended to be syndicated, which means that their content is going to be sliced and diced in various and unforeseeable ways.  This makes a difference.

In what ways is an application allowed to copy and present a given feed's content?  To start with, it can do things covered by fair use (*).  There are some interesting issues around what exactly fair use means in the context of web feeds, but ignore those for the moment.  What about copying beyond what fair use allows?

It would be awfully helpful if every feed simply included a machine-readable license: for example, a <link rel="license" href="..."/> element.  We could then write code that follows the author's license for things beyond fair use.
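As a sketch of what such license-following code might start with, here's a minimal Python example that pulls the license link out of a feed.  The helper name `find_license` and the sample feed are my own, and it assumes an Atom feed using the standard Atom namespace; a real aggregator would also need to handle RSS and feed-level vs. entry-level links.

```python
# Sketch: find a feed-level <link rel="license"> in an Atom feed.
import xml.etree.ElementTree as ET

ATOM_NS = "{http://www.w3.org/2005/Atom}"  # Atom 1.0 namespace

def find_license(atom_xml):
    """Return the href of the first rel="license" link, or None."""
    root = ET.fromstring(atom_xml)
    for link in root.iter(ATOM_NS + "link"):
        if link.get("rel") == "license":
            return link.get("href")
    return None

feed = """<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Example Feed</title>
  <link rel="license" href="http://creativecommons.org/licenses/by-nc/2.0/"/>
</feed>"""
print(find_license(feed))  # http://creativecommons.org/licenses/by-nc/2.0/
```

A feed with no license link simply yields `None`, which is the "fall back to fair use" case discussed below.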

Specifically, if a feed author wanted to put their feed content in the public domain, they would simply link to the Creative Commons public domain license, which includes the following RDF code:
<rdf:RDF xmlns="" xmlns:rdf="">
  <License rdf:about="">
    <permits rdf:resource=""/>
    <permits rdf:resource=""/>
    <permits rdf:resource=""/>
  </License>
</rdf:RDF>
The code here is a machine-readable approximation of "put this in the public domain". 
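To illustrate how an application might act on such a block, here's a short Python sketch that lists the actions a Creative Commons RDF license permits.  The namespace URIs and the sample license data below are the ones Creative Commons used historically and should be treated as assumptions, as should the helper name `permitted_actions`.

```python
# Sketch: extract the rdf:resource of each cc:permits element.
import xml.etree.ElementTree as ET

RDF_NS = "{http://www.w3.org/1999/02/22-rdf-syntax-ns#}"
CC_NS = "{http://web.resource.org/cc/}"  # historical CC metadata namespace

def permitted_actions(rdf_xml):
    """Return the list of permitted-action URIs in a CC RDF block."""
    root = ET.fromstring(rdf_xml)
    return [p.get(RDF_NS + "resource") for p in root.iter(CC_NS + "permits")]

rdf = """<rdf:RDF xmlns="http://web.resource.org/cc/"
         xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
  <License rdf:about="http://web.resource.org/cc/PublicDomain">
    <permits rdf:resource="http://web.resource.org/cc/Reproduction"/>
    <permits rdf:resource="http://web.resource.org/cc/Distribution"/>
  </License>
</rdf:RDF>"""
print(permitted_actions(rdf))
```

An aggregator could then check whether the specific copying it wants to do appears in that list before going beyond fair use.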

Alternatively, if a feed author just wanted to require attribution, they'd instead use the Creative Commons Attribution license.  To allow copying for non-commercial use only, they'd use the popular Attribution-NonCommercial license.  This license means that the content must be attributed, and may be freely copied only for non-commercial uses.  Plus, of course, fair uses.

According to the Creative Commons proposed best practice guidelines, the non-commercial license would mean that a web site re-syndicating the feed would not in general be able to display advertisements next to the feed content.  Such an application could fall back to fair use only for that feed (perhaps showing only headlines), or it could suppress ads for that one feed.  The main point is that it would know what it needed to do.
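That ad-suppression decision could be sketched as follows, assuming (hypothetically) that the aggregator has the feed's license URL in hand and that a NonCommercial restriction can be recognized from a Creative Commons-style URL path segment such as `by-nc`:

```python
# Sketch: may we show ads next to this feed's content?
# Assumes CC-style license URLs where a path segment names the license.
NONCOMMERCIAL_CODES = {"by-nc", "by-nc-sa", "by-nc-nd", "nc", "nc-sa", "nc-nd"}

def ads_allowed(license_url):
    """True if displaying ads beside the content appears permitted."""
    if license_url is None:
        return False  # no license: stick to fair use, no ads on that content
    segments = license_url.rstrip("/").split("/")
    return not any(seg in NONCOMMERCIAL_CODES for seg in segments)

print(ads_allowed("http://creativecommons.org/licenses/by-nc/2.0/"))  # False
print(ads_allowed("http://creativecommons.org/licenses/by/2.0/"))     # True
```

This is a crude string check for illustration only; a robust implementation would resolve the license URL to its RDF metadata and test for a CommercialUse prohibition there.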

So, in this perfect world where everything is clearly licensed, I think life is fairly simple.  Let me know if you think I'm missing something.

In part 2 I intend to return to the messy real world and start complicating things.
(*) ...or other applicable national legal codes, since fair use applies only in the U.S. as Paul pointed out.



  1. "[an application] can do things covered by fair use" you say. Yet fair use is a US only concept, so that premise only applies to feeds consumed in the US. IANAL so I don't know how fair use applies to materials from other countries. I would say that making information available through a feed means you expect it to be copied and re-presented, and so I would rephrase the question at the end of the paragraph as "What about copying beyond what you would expect to be done with a feed?"

  2. Paul -- Yes, please forgive my US-centricism.  This is a very simplified, abstracted view of things and trying to keep in mind the various national legal codes would have brought this to a grinding halt.  So, I'll add this to the "complications" follow-up.  For the time being, to "fair use" above please add "...or other applicable national legal codes".

  3. Valuable resource of law news summaries:

