2009/12/22

Open is as Open Does

Why I'm happy to be working at Google: The meaning of open (Official Google Blog).  Open technology and open information are ultimately about freedom and control.  Open technology gives everyone freedom to invent, compete, and improve the world; open information gives people ultimate control over the literal bits that belong to them.  These are aspirational, ambitious, long-term goals.  And yes, in the long run they benefit Google as well as the world, but the point is that they do this, at least in principle, by increasing value for everyone.  That's a good goal.  It's a difficult goal, and it takes a lot of up-front investment and a long-term perspective — something the world needs more of right now.

Happy holidays!

2009/12/12

Massively Collaborative Mathematics via Blog Comments

This year, the hive mind proved a theorem, and is going to submit a paper under the pseudonym D.H.J. Polymath.  The NYT reports:


In January, Timothy Gowers, a professor of mathematics at Cambridge and a holder of the Fields Medal, math's highest honor, decided to see if the comment section of his blog could prove a theorem he could not.
In two blog posts — one titled "Is Massively Collaborative Mathematics Possible?" — he proposed an attack on a stubborn math problem called the Density Hales-Jewett Theorem. He encouraged the thousands of readers of his blog to jump in and start proving. Mathematics is a process of generating vast quantities of ideas and rejecting the majority that don't work; maybe, Gowers reasoned, the participation of so many people would speed the sifting.

It's unfortunate that the NYT doesn't link to the blog, because the procedural discussion is very interesting.  Part of the kickoff is setting the norms for participation, which are of course aimed at helping to move the massively distributed discussion forward.  From an educational point of view, it's interesting to see the public record of how the discussion actually evolves -- that is, how people actually do high level mathematics.  There remain thorny questions about credit, as publications are currency, and it's not clear how credit would work in this sort of collaboration.  Gowers writes:
It seems to me that, at least in theory, a different model could work: different, that is, from the usual model of people working in isolation or collaborating with one or two others. Suppose one had a forum (in the non-technical sense, but quite possibly in the technical sense as well) for the online discussion of a particular problem. The idea would be that anybody who had anything whatsoever to say about the problem could chip in. And the ethos of the forum — in whatever form it took — would be that comments would mostly be kept short. In other words, what you would not tend to do, at least if you wanted to keep within the spirit of things, is spend a month thinking hard about the problem and then come back and write ten pages about it. Rather, you would contribute ideas even if they were undeveloped and/or likely to be wrong.
What's next up?  Possibly, the origin of life.

(Via /Message.)

2009/12/03

"RE": Open alternative to ReTweet

Just discovered a useful alternative microsyntax to Twitter's RT: "RE" (via Stowe Boyd). It's been available in Tweetdeck's menu since this summer but I hadn't noticed. It's effectively an equivalent to email's In-Reply-To that creates mail threads, and it uses links rather than copying tweets, so the 140 character problem is solved.
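To make that concrete, here's a minimal sketch of how a client might pull the In-Reply-To target out of an RE tweet. The "RE <url> <comment>" form is my assumption for illustration; the actual microsyntax is whatever Tweetdeck and friends converge on.

```python
import re

def parse_re(tweet_text):
    """Extract the RE target URL from a tweet, if present.

    Assumes the hypothetical form 'RE <url> <comment>', with the
    target given as a link rather than a copied tweet -- which is
    what sidesteps the 140-character problem.
    """
    m = re.match(r'RE\s+(https?://\S+)\s*(.*)', tweet_text, re.IGNORECASE)
    if not m:
        return None
    return {"in_reply_to": m.group(1), "comment": m.group(2)}

parsed = parse_re("RE http://twitter.com/gregarious/status/123 I disagree")
```

An RE-aware client could then fetch the target URL to render the thread inline, exactly like a mail reader chasing In-Reply-To headers.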
I love RE's potential because it can solve all the problems with newRT and oldRT, and opens up some new possibilities as well:




gregarious: I really like when the clocks change
    about 2 hours ago from Twitter · Retweeted by 3
    stoweboyd: I disagree with @gregarious about daylight savings time
    themaria: he sleeps all day anyway
    brianthatcher: I never come out in the light of day


Note that the target of an RE doesn't have to be a tweet; it can be any URL. So RE can also reach outside the Twitterverse and unify all sorts of threaded conversations. In particular, if the target of an RE were Salmon-enabled, a tool could trivially send a salmon representing the RE back to the source -- allowing for truly distributed conversations that retain threading, authorship, and provenance.
The only problem with RE at the moment is that it turns the target into an opaque URL, which looks ugly in today's clients. This can be solved, as shown above, with smarter RE-aware clients. It's not a huge amount of work either -- and the payoff is that you can have your newRT collapsed/summarized view and still have conversations. And even better, they can be open and distributed beyond just Twitter.

2009/12/01

OpenID delegation for Google's OP



Following up on Brad's announcement of last week, I wanted to test out OpenID delegation. I just set up http://www.johnpanzer.com/ to delegate to Google's OP, which lets me use my own domain as my identifier.  Earlier I was using AOL's OP; you can pick any OP you like without changing your identity.  Doing this for a static web page like this is fairly simple:
  • Make sure you know your Google Profile URL; it's useful to pick a nice readable one (which will be taken from the GMail namespace).  Mine is jpanzer.at.acm.
  • Add the following two links to the <head> section of your page HTML, substituting your own Google Profile link for jpanzer.at.acm:
<link rel="openid2.provider" href="https://www.google.com/accounts/o8/ud?source=profiles" > 
<link rel="openid2.local_id" href="http://www.google.com/profiles/jpanzer.at.acm" > 

Note that the Google OP supports only OpenID 2.0.  This is a Good Thing.
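For the curious, a relying party's first step is just scraping those two <link> elements out of the page head. A quick sketch using Python's html.parser, run against markup mirroring the links above:

```python
from html.parser import HTMLParser

class OpenIDLinkFinder(HTMLParser):
    """Collect openid2.provider / openid2.local_id <link> rels from a page."""
    def __init__(self):
        super().__init__()
        self.links = {}

    def handle_starttag(self, tag, attrs):
        if tag == "link":
            a = dict(attrs)
            if a.get("rel") in ("openid2.provider", "openid2.local_id"):
                self.links[a["rel"]] = a.get("href")

# Sample markup matching the delegation links described above.
page = '''<html><head>
<link rel="openid2.provider" href="https://www.google.com/accounts/o8/ud?source=profiles" >
<link rel="openid2.local_id" href="http://www.google.com/profiles/jpanzer.at.acm" >
</head><body></body></html>'''

finder = OpenIDLinkFinder()
finder.feed(page)
```

After this, `finder.links` holds the OP endpoint and the delegated identifier, and the relying party proceeds with normal OpenID 2.0 checkid flow.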

2009/11/23

"Algorithmic Authority" (via Clay Shirky)

Clay Shirky recently wrote up some thoughts on algorithmic authority, well worth reading: http://www.shirky.com/weblog/2009/11/a-speculative-post-on-the-idea-of-algorithmic-authority/.  When you merge "publishers" and "readers" the People Formerly Known as the Audience are no longer a statistical mass of consumers, but are themselves producers of content with their own authority.  This long tail of content production makes personalized ranking and authority assessment far more important.  Back when there were three networks, you could rely on hearsay or statistical aggregation of authority to figure out which one had the 'best' news coverage.  With a million or a billion, you need something much more fine-grained and low-overhead.  Thus "algorithmic authority", though it also depends on user actions -- such as linking to and recommending someone else or their content.

2009/11/12

Twitter's NewRetweet and Darwinian Selection

A couple of days ago, I tweeted "I like Twitter's new Like feature. Just not sure why they called it Retweet."  That's a bit snarky; @ev and the crew deserve more than 140 characters, because they've clearly put a lot of time and thought into this feature and it solves a lot of real problems. People have critiqued some of the superficial problems; I'd like to focus on just one core issue, which @ev also touches on, namely the inability to add commentary, which turns this into (as one re-tweeter remarked) a streamlined "propagate" feature.  This does let a meme propagate at the speed of Twitter and yet be coalesced for display purposes, and that's exactly the right thing for some memes.  In some ways it's a very democratic, Darwinian process; the tweets that are most likely to be NewRetweeted will undoubtedly get propagated more efficiently.

But there's another piece to Darwinian selection:  It's not just reproduction of the fittest, but reproduction with variations.  Sometimes, that's where the real value lies.  If you look at biological ecosystems, ultimately, it's where all the value lies.  That's where you get new ideas, by riffing off of other ideas, modifying them, and mashing them up.  Making this harder is, well, bad.

It's not like they haven't thought of this.  @ev does give some hints of future possibilities:
What about those cases where you really want to add a comment when RTing something? Keep in mind, there's nothing stopping you from simply quoting another tweet if that's what you want to do. Also, old-school retweets are still allowed, as well. We had to prioritize some use cases over others in this release. But just as Twitter didn't have this functionality at all before, people can still work around and do whatever they want. This just gives another option.
This ignores the change in affordances with NewRetweet.  Tools support ClassicRetweet today; I fear they're gearing up to switch over to the shiny NewRetweet, putting a barrier in front of users who want to propagate-with-comments.  It's going to be even more confusing because they're keeping the name Retweet but taking away an important piece of functionality.  We can't even talk about the differences without inventing new vocabulary — thus NewRetweet vs. ClassicRetweet.  I hope that reproduction-with-variations will not go the way of the dodo, but I fear that it will if the ecosystem continues on its current path.

2009/11/04

One XRD To Rule Them All

Discussing The Hammer Stack most of the day today.  Resolving several issues.  Will keep going until coffee runs out.  XRD will bind all services together and rule them all.  Notes here.

Addendum Nov 5:
One Protocol to rule them all, One Protocol to find them,
One Protocol to bring them all and in the DNS bind them
In the Land of Standards where the Shadows lie.

2009/10/27

The Salmon Protocol: Introducing the Salmon Project

A few days ago, at the Real Time Web Summit, we had a session about Salmon, a protocol for re-aggregated distributed conversations around web content.  I was hoping for some feedback and to generate some interest, and I was overwhelmed by the positive reactions, especially after Louis Gray's post "Proposed Salmon Protocol aims to unify Conversations on the Web". Adina Levin's "Salmon - Re-assembling distributed conversations" is a good, insightful review as well. There's clearly a great deal of interest in this, and so I've gone ahead and expanded Salmon's home at salmon-protocol.org with an open source project, salmon-protocol.googlecode.com, and a mailing list, groups.google.com/group/salmon-protocol.
The project is a home for all types of open source code related to Salmon, but particularly reference implementations and validators.  At the moment, it contains the Python/Google AppEngine source code for the demo at salmon-playground.appspot.com. I also intend to host the actual spec text there for the moment, along with the reference implementation code, and develop both in parallel based on discussions on the mailing list.  The list is for discussions about the Salmon Protocol and its implementations.
This is also a call to action.  If you are interested in helping to define this new protocol, or work on a reference implementation or validator, please join the mailing list and introduce yourself.

2009/10/21

Use your email address as an OpenID

We're not quite there yet, but soon you'll be able to use any reasonable user identifier as an OpenID.  Most importantly, email addresses.  Dirk just wrote a great blog post explaining "Email Addresses as OpenIDs" which goes into the nitty gritty.  Basically, this all just needs a finalized XRD spec to rely on, and adoption of same (and ability to use acct: URIs) in the next rev of the OpenID specification.  The upcoming IIW will be a great opportunity to make some progress on these.

2009/09/30

Really awesome new look for Fake Steve Jobs

New template designed by Tina of the Blogger team. Plus tons of snark and even actual content from ol' Fake Steve. Nice!

in reference to: The Secret Diary of Steve Jobs (view on Google Sidewiki)

2009/09/27

Mint Promises

Mint is a great service, and I'm actually trusting it quite a bit.  But their re-assurances are giving me the willies:
Your credentials are safe on Mint.com.  We use bank-level encryption to secure your login credentials, they cannot be compromised. We are establishing a read-only connection to your bank, we cannot move or transfer money. -- mint.com
Of these 3 statements, the first is hopefully true for some reasonable value of "safe".  The second and third statements are demonstrably untrue, and they undermine the first assertion.  (As a matter of fact, when my bank offered a "read only" username/password mechanism, I tried it out with Mint -- Mint choked on the results.)  Mint has full access and can impersonate me to my bank.  I strongly dislike this situation and want Mint and the banks to change this.

Mint + Banks:  Please implement a least-privilege access mechanism.  OAuth would be great, but frankly anything including a read-only password would be better than today's situation.  Mint: You really want to be able to prove that you couldn't be culpable if there is a leak or a bug.  Banks:  You don't want people impersonating your customers, do you?  Do it the right way, guys.

2009/08/13

Camel sighting at Google

At Building 46, off Charleston, a camel. I'm sure there's a good explanation, but for some things I just prefer the mystery.

2009/08/07

Open Issues for Discovery / Webfinger

The problem: Discover information that joe@example.org wants to publish to the world; things like their preferred identity provider, their public avatar, public contact methods, etc. Same mechanism should basically work for joe@example.org or http://example.org/joe, no wheel reinvention.

The Webfinger session at the last IIW was quite productive in the sense that it produced a long list of open issues that need resolution. The whiteboard snapshot to the right (stitched together thanks to @factoryjoe) shows the list, albeit in low res form. Translating the notes, and giving my takes:

Starting assumption : Domain owners need to play along. We're not trying to handle the case where joe@example.org wants to be discoverable, but doesn't control example.org and the domain owner doesn't want to implement discovery.

Open Issues

Location of host-meta data: Older spec calls for this to be at /host-meta for every domain; Mark Nottingham has updated his proposal to create a /.well-known/ directory instead and put host-meta in there; I'm +10 to that.

Should discoverers try www.example.org if example.org itself doesn't support discovery? My take: No, if example.org doesn't provide the discovery info directly it can do a 3xx redirect to a site that does. Don't complicate the protocol.

Should discoverers try https: URLs first? My take: No; this is not confidential data, and if you want source verification, it's more complicated than just using SSL and there are other solutions coming down the pike that are better.

What should the protocol do with 3xx's? This clearly needs a working group convened to decide on the exact correct flavor of 3xx to use in different situations. But, don't screw over people who need to move web sites and who leave a 301 to point to a new location.

Should it support other name@domain identifiers beyond email? Yes, of course.

Proxy problems with Accept: & Vary for getting discovery data from top level domains: This goes away with /.well-known.

What should the exact template semantics be (just {id}, or {local} + {domain}) for mapping a name@domain ID to a URL? Doesn't matter; pick one.

Must the discovery data be signed to enable the pattern to work? No, clients should make their own security decisions based on the evidence given. Signing is a good idea; make it easy to accomplish.

We need to document best practices on doing all of this stuff. Yes.
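Taken together, my takes above imply a very simple client. A sketch, assuming the /.well-known/ proposal wins out (the helper names are mine, not from any spec):

```python
import urllib.request

WELL_KNOWN_PATH = "/.well-known/host-meta"

def host_meta_url(domain):
    # Per the positions above: no www.<domain> fallback, and no
    # https-first requirement -- just the bare domain as given.
    return "http://" + domain + WELL_KNOWN_PATH

def fetch_host_meta(domain):
    # urllib follows 3xx redirects by default, which covers the
    # "left a 301 behind when the site moved" case without any
    # extra protocol machinery on the discoverer's side.
    with urllib.request.urlopen(host_meta_url(domain)) as resp:
        return resp.read().decode("utf-8")
```

If the GET fails outright, discovery simply fails; the domain owner has to play along (the starting assumption above), so there's nothing clever to fall back to.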


2009/07/17

Health Insurance Insider Tells it Like it Is

Sometimes, the existing order melts away when its defenders' cognitive dissonance reaches deafening levels and they defect to the revolutionaries. Read this Bill Moyers interview with Wendell Potter -- money quote:
BILL MOYERS: Why is public insurance, a public option, so fiercely opposed by the industry?

WENDELL POTTER: The industry doesn't want to have any competitor. In fact, over the course of the last few years, has been shrinking the number of competitors through a lot of acquisitions and mergers. So first of all, they don't want any more competition period. They certainly don't want it from a government plan that might be operating more efficiently than they are, that they operate. The Medicare program that we have here is a government-run program that has administrative expenses that are like three percent or so.

BILL MOYERS: Compared to the industry's--

WENDELL POTTER: They spend about 20 cents of every premium dollar on overhead, which is administrative expense or profit. So they don't want to compete against a more efficient competitor.
The health insurance industry is driving down the same road that led to the financial industry's implosion. Except in this case, the casualties aren't balance sheets; they're us, our families, and our children.

2009/05/20

Webfinger White Board at IIW

Whiteboard from the Webfinger session at IIW, in the form of an iPhone triptych:



There were a lot of good issues raised in this session, and we didn't even get to talking about the XRD schema. The right hand side of the board lists the open issues that need resolving before we can deploy real code and validators.

2009/05/16

IIW8 Next Week

Gah! The Internet Identity Workshop has snuck up on me once again. It starts Monday, with the bulk of the sessions happening Tuesday and Wednesday morning. I'm planning to talk with people, among other things, about personal web discovery and a project @bradfitz has started to implement the core bits.

2009/04/28

Personal Web Discovery (aka Webfinger)

There's a particular discovery problem for open and distributed protocols such as OpenID, OAuth, Portable Contacts, Activity Streams, and OpenSocial.  It seems like a trivial problem, but it's one of the stumbling blocks that slows mass adoption.  We need to fix it.  So first, I'm going to name it:

The Personal Web Discovery Problem:  Given a person, how do I find out what services that person uses?

This does sound trivial, doesn't it?  And it is easy as long as you're service-centric; if you're building on top of social network X, there is no discovery problem, or at least only a trivial one that can be solved with proprietary APIs.  But what if you want to build on top of X, Y, and Z?  Well, you write code to make the user log in to each one so you can call those proprietary APIs... which means the user has to tell you their identity (and probably password) on each one... and the user has already clicked the Back button because this is complicated and annoying.

This is also the cause of the "NASCAR Effect" that is plaguing OpenID UIs today -- you are faced with a Hobson's choice of making the user figure out what their OpenID is on their favorite provider, or figuring it out for them by making them click on a simple button... on an ever-growing array of buttons to cover all of your top identity providers and your business partners.  So the UI is more complicated than simple username/password.  This is not a recipe for success.

Next, there's the sharing problem -- if I want to share my calendar with someone, how does my software know what calendaring service my friend uses?  Again, if we're both on the same calendar service, then we're fine; otherwise we're in the situation that email was in decades ago, where you had to figure out the bang-path hop to hop address to reach your intended recipient.  (Note that in this case, the service being discovered is for a user who isn't even present.)

Finally, what is a person on the web?  At the moment we can represent a person as a URL (OpenID) or as an email address (most everybody).  A huge adoption issue for OpenID is the lack of a standard for using an email address as an OpenID.  The lack of such a standard is due to email address privacy concerns, and lack of discovery services for email addresses.  The horse has mostly left the barn on email address privacy already, as everyone uses email addresses for logins, and we just need to be careful about not publishing them publicly.  Discovery is now a solved problem, but the news isn't widely distributed yet.

Last week, over bacon and coffee at Social Web Foo Camp, Blaine, Breno, and I realized that all of the pieces are in place to solve these problems, and that they just need to be hooked up the right way, and threw together a last minute session Sunday morning to talk about it.  Here's my take-away:

Personal Web Discovery Puzzle Piece #1: URLs are people, and so are email addresses.


We allow email addresses anywhere an end user would use an OpenID -- from an end user's point of view, they can use an existing email address as an OpenID.  While we're at it, we allow any sufficiently well formed and discoverable string to function as an OpenID, for example Jabber IDs.  This means that a user can use any login ID as an OpenID, and also that if I know someone's email address from their business card, I can share things like my calendar with them (without sending email).  Of course this requires discovery via email addresses to make OpenID work; fortunately that's the second puzzle piece.
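A client accepting "any sufficiently well formed and discoverable string" needs a normalization step up front. Here's a sketch of one possible mapping; using the acct: scheme for user@host identifiers is one proposal on the table, not a settled convention:

```python
def to_discovery_uri(identifier):
    """Normalize a user-entered identifier into a URI to run discovery on.

    Illustrative only: the acct: scheme for user@host IDs (emails,
    Jabber IDs) is a proposal, not a finalized standard.
    """
    identifier = identifier.strip()
    if identifier.startswith(("http://", "https://")):
        return identifier                 # already a URL-style OpenID
    if "@" in identifier and " " not in identifier:
        return "acct:" + identifier       # email-ish or Jabber-ish ID
    return "http://" + identifier         # bare domain or path; assume http
```

The point is that the user just types whatever identifier they already know, and the software figures out how to discover against it.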

Personal Web Discovery Puzzle Piece #2: The new discovery spec is here!


draft-hammer-discovery-03 is hot off the virtual presses this month; section 4.4, The Host Metadata Document, describes the basic piece needed for discovery, but in that spec it's difficult to see how this fits in with puzzle piece #1.  Here's how:  If I provide email addresses at example.com, while redirecting HTTP requests from example.com to www.example.com, I publish a text file at http://www.example.com/host-meta, which contains a line like this one:
Link-Pattern: <http://meta.example.org/?q={%uri}>; 
    rel="describedby";type="application/xrd+xml"
This means "take the thing you're asking about in URI form -- e.g., mailto:joe@example.com -- stick it in the query parameter to the meta.example.org service, and do a GET on that to retrieve a bunch of metadata about joe@example.com".  The metadata format XRD is itself a simplification of the existing metadata used by OpenID and OAuth today, and it's basically typed links based on URLs.  It maps joe@example.com to the appropriate OpenID provider to be used -- and that itself can be editable, so Joe can choose to use any provider he or she wishes.

So with a bit of swizzling, clients can map from joe@example.com to see if it's usable as an OpenID and if so, where to send the user to log in.  This eliminates the NASCAR effect.  It also means that clients such as web browsers can check to see if the user has a usable OpenID already (it probably has the user's email address from form fill already) and can present a very simple chrome-based "Log in as joe@example.com" on any web site that allows OpenID.  As a nice side effect, we also make the whole system much more phishing-resistant.

But authentication is just one service.  What if I want to provide a way for people to get my public activity stream, for example?  That's almost trivial; just map joe@example.com to the default activity stream, and _that_ stream is a public Activity Stream feed.  I can also link to my blog and its feeds, my photo stream, my calendar, my address book, etc.  It's a user-centric web of services, tied together by a single identifier and discovery.

What about privacy?

The basic discovery use cases don't require any real authentication or security beyond that provided by HTTP(S).  The services pointed at can of course require authentication -- if I publish a calendar endpoint, that doesn't mean I let just anyone see it; or I may make my free/busy times public but my details may be ACL'd.  The process of discovering that a resource is ACL'd and how to go about authenticating so as to get access is just OAuth (or rather, a usage of the draft-hammer-discovery spec that uses types and endpoints specific to OAuth).  So it's discovery all the way down, and it's possible to mix in as much or as little privacy protection as is needed in each case.  The nice thing is that everybody is already standardizing on OAuth.

Sounds nice, but how does this metadata get created?  Out of thin air?

So we have standards ready to go, and could start writing client libraries today.  But where will all of this metadata come from?  What will motivate identity providers to publish this data, and how can we ensure that they allow users to configure it and not lock them in to the providers' own services?

There are several answers.  First, this spec provides more value to an email address -- so email providers have an incentive to provide it.  It's fairly trivial for them to do at least the basics; publish a static file off their main (or www) site, and provide a basic mapping service to point at whatever they have or know already that's public.  So the cost is low, and the potential benefits are high -- and once one email provider does this, it provides more incentive for the others to follow.

Second, some of the metadata is already present; every Yahoo! and Google user already has an OpenID service but none of them know it yet.  So there is value in just hooking up what's automatically provided.  However, this does lead to the danger of lock-in -- it's fine to default to your own service, but you shouldn't be limited to that service and you should also be able to override the defaults, ideally without needing to go and configure boring settings pages.  Profile pages are a valuable source of discovery data here if profile providers allow linking to services elsewhere.

Going Meta

There is another way to bootstrap.  Once you have a personal web discovery metadata service, and a way to edit per-user data, you can also create a personal web update service.  So then if you're at Flickr, and Flickr knows your email address, Flickr can find out, via discovery, if it can update your personal web data; and if so, offer to add itself as a photo stream service.  This would be done via OAuth of course, with your permission.  So services themselves could take care of the grungy work of adding links to your personal web.

Next Steps

Next steps are to get this documented properly, in the form of a HOWTO and running example code and some solid client libraries.  These are worth a million words of spec.

--
NB: You'll notice in general that there's no brilliant new idea here; this is just putting pieces that already exist together.  In fact, much of this is a re-invention of Liberty WSF discovery, but less SOAP-y and more deployable.

2009/04/18

Positive Feedback in Social Search

One suggestion from today's social search session at #swfoo was to send queries off to both search engines and your friends (e.g., "vacations in Venice").  A problem here is that many of your friends are incompetent about vacations in Venice, so sending them this both spams them and decreases results relevancy -- noise increases linearly with overall size of system.  This is why the good results that early adopters with 20K followers have with "what's the best pizza in Sebastopol" aren't scalable.

But, there's a nice solution to this I think.  As you do get results that are somewhat relevant from friends, you click through on their answers.  Your clicks tell the system that friend's answer was relevant in context, allowing it to learn which friends are competent in various fields.  Combine these results across everyone who is asking questions of the same friends to cancel out bias; you're left with a vector of weights for each person in the network, one weight per field of expertise.  Use this to do a few things:
  • Explicit reputation for people who answer, to accompany the implicit social debt incurred
  • Rank their answers higher in search results -- in many cases beating out traditional search engines, where the engines prove less competent
  • Don't spam incompetent people with questions they can't answer
  • Potentially, reach beyond your immediate social network to find the real experts on the subjects and send your question to them.
This is much more scalable than trying to categorize your friends explicitly as experts in various areas.  You'll still do this implicitly, by first clicking on results from friends you already know to be expert, helping to bootstrap the system.  But you never need to know you're doing this; the system learns automatically.
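A toy sketch of the learning loop described above; all names and the threshold are invented for illustration:

```python
from collections import defaultdict

# weights[friend][topic] -> learned competence score for that friend/topic
weights = defaultdict(lambda: defaultdict(float))

def record_click(friend, topic):
    # A click-through on a friend's answer is an implicit relevance vote.
    weights[friend][topic] += 1.0

def should_ask(friend, topic, threshold=1.0):
    # Don't spam friends the system hasn't seen be competent on this topic.
    return weights[friend][topic] >= threshold

def rank_answerers(topic, friends):
    # Highest learned weight first; ties keep input order (sort is stable).
    return sorted(friends, key=lambda f: -weights[f][topic])

# Bootstrap: user clicks through twice on alice's Venice answers,
# once on bob's pizza answer.
record_click("alice", "venice")
record_click("alice", "venice")
record_click("bob", "pizza")
```

The real system would also aggregate clicks across all askers to cancel out individual bias, and decay old weights, but the core is just this per-(friend, topic) counter.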

Social Web Foo: Standards for Public Social Web

Small but useful #swfoo session.  My idea was to try to give public social data formats, protocols, and standards some quality time, since (a) privacy and ACLs introduce many difficult problems that eat up lots of discussion time; and (b) there are many key use cases that are totally public, and might be easily solvable if we remove the distraction of privacy controls. @niall, @dewitt, and @steveganz attended, but per Foo rules, I won't attribute specific quotes.

Examples of this include public blogs, update streams, and feeds; and public following/friending relationships.  Typically following (one way) seems to be more likely to be public than friending, for social reasons.

Some random notes:  
  • Public content, once published, should be assumed to be "in the wild" everywhere, indefinitely, until the heat death of the universe.
  • PubSubHubbub (prior session) is a great example of a proposed open standard for improving the performance of public social data.
  • Problem:  How does an author prove authorship of data that's "in the wild" or syndicated?  Conversely, how do readers determine authenticity of an authorship claim?
  • Blogger's import/export facility currently "wrings the identity" out of the data, because we don't have any way to detect tampering with the supposed author/post/comment data between export and import.
  • There was a suggestion that signing a subset of fields in an Atom entry with Google's public key could provide authorship attestation for that data (content, title, author, etc.), in UTF-8 only, which would then let us solve the import/export and syndication attribution problems without having to deal with DigSig.
  • Great example of a situation where a hosting web site needed attestation from a chain of 3 parties before allowing possibly copyright-infringing content to be uploaded; no standard exists for doing this online.
  • Would like to be able to link to a real world identity (vouched for) or to at least a profile provided by someone like Google; there are lots of pieces of data that would let Google vouch for identity of a profile owner, but no standard way to express this publicly.
  • Google for example could also do more general reputation which could also be public.
  • A public social graph consisting of following relationships is both useful, and potentially honestly mine-able, assuming users opted in with full knowledge that data was public and mine-able; this is very different from private relationships.
  • Public social graph is also potentially a way to determine public reputation; it's possible to game this, but difficult especially if the relationships are publicly visible on the open web so that subverting them believably would take months or years of stealth work.
  • Being able to verify past employment, educational credentials, etc. (data that a user chooses to make public and verifiable) would be very useful.

Deep Thought at Social Web Foo

Not mine; these guys:

2009/04/07

Happy Birthday, RFC!

40 years ago today, the RFC (Request For Comment) was born -- RFC 1, "Host Software", was written April 7, 1969. Steve Crocker, the author, described its genesis in an op-ed piece for the New York Times. The humble RFC system is the basis for the entire infrastructure of the Web; it's amazing how far rough consensus and running code will get you.

2009/03/30

Goin' to IIW 2009a

I'll be at the Internet Identity Workshop May 18-20 and will be conversing about social identity and presence on the 'net... or whatever hot topics arise during the conference, which is an unconference, so you can decide what it's about in real time.  And, if you sign up by tomorrow, there's even an early bird special.  Go for it!  http://www.internetidentityworkshop.com/?page_id=3

2009/03/25

Being stalked by companies on Twitter

Great!  I'm now being followed (stalked?) by companies:

Hi, John Panzer (jpanzer).
AL (TonyAlba_Pizza) is now following your updates on Twitter.

Check out AL's profile here:
 http://twitter.com/TonyAlba_Pizza


You may follow AL as well by clicking on the "follow" button.
Best,
Twitter

2009/03/16

URLs are People Too, Even on Facebook

s/vanity/friendly/.  Though actually getting a friendly profile URL seems to require some mojo at the moment.

2009/03/15

Exploring Drafty

1. Jason Shellen: Putting an exclusive, first-100-people-only invite code into a tweet is awesome marketing.
2. Uses OAuth to access my Blogger data; yes!  I probably would have trusted Jason with my password anyhow, but it's really good not to have to.
3. Thirty second impression: Looks like a way to generate and disseminate conversations.  I think it needs a social network component for exploration (yes, everyone is twittering about NCAA -- I just don't care).

2009/03/12

The Awesome Turbo Plane Car

...which my son threw together this weekend. I am given to understand that it has a turbofan engine.

2009/03/03

What is the Social Internetwork?

Way back when, before the Internet, there were a bunch of different computer networks that didn't talk to each other.  The situation:

"For each of these three terminals, I had three different sets of user commands. So if I was talking online with someone at S.D.C. and I wanted to talk to someone I knew at Berkeley or M.I.T. about this, I had to get up from the S.D.C. terminal, go over and log into the other terminal and get in touch with them. [...] I said, it's obvious what to do (But I don't want to do it): If you have these three terminals, there ought to be one terminal that goes anywhere you want to go where you have interactive computing. That idea is the ARPAnet."

Robert W. Taylor, co-writer with Licklider of "The Computer as a Communications Device", in an interview with the New York Times, via Wikipedia.
(And ARPAnet begat the Internet, which begat the World Wide Web, which begat Web 2.0.)

We're in a parallel situation today in online social networks. There are a bunch of them, they don't really interoperate, and it's obvious what to do -- there ought to be one network, yours, that goes with you anywhere you have an online social context.

OpenSocial and Google Friend Connect provide an equivalent of the routers and gateways that connected disparate digital networks in the seventies, creating a network of networks -- the Social Internetwork.

2009/03/02

The Social Bar on Blogger

I've just added the Google Friend Connect Social Bar to the bottom of Abstractioneer; as with the Demo gadget, this involved copying and pasting GFC code and substituting the Abstractioneer site ID so it hooks up correctly. Try it out!

2009/02/28

The OpenSocial API on Blogger

The Google Friend Connect integration also brings the OpenSocial API to Blogger -- not just the Gadgets APIs, which Blogger has had for a while, but also the social APIs. Let's see how the Friend Connect demo gadget (which calls the OpenSocial APIs) works within this blog post:

[Friend Connect demo gadget embedded here]

In Friend Connect, the followers of this blog show up as "friends" of the OWNER, which is the blog (Abstractioneer). I show up as an administrator of the site, as does the Blogger service. And, my friends-on-Abstractioneer show up as my friends when I sign in.
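For the curious, the calls a gadget like this makes look roughly like the following OpenSocial 0.8 sketch. A real gadget gets the global `opensocial` object from its container; here it is passed in as a parameter so the logic stands alone, and the result keys ('owner', 'ownerFriends') are my own invented labels, not anything the API mandates.

```javascript
// Sketch: fetch the OWNER (on a Friend Connect site, the site itself)
// and the OWNER's friends via the OpenSocial data API.
function fetchOwnerAndFriends(opensocial, callback) {
  var req = opensocial.newDataRequest();
  var idSpec = opensocial.newIdSpec({ userId: 'OWNER', groupId: 'FRIENDS' });
  req.add(req.newFetchPersonRequest('OWNER'), 'owner');       // the site
  req.add(req.newFetchPeopleRequest(idSpec), 'ownerFriends'); // its followers
  req.send(function (response) {
    callback(response.get('owner').getData(),          // a Person
             response.get('ownerFriends').getData());  // a Collection
  });
}
```

On this blog, the 'owner' result would be Abstractioneer itself, and 'ownerFriends' its followers — which is exactly the mapping described above.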

(It's amusing how social Blogger is becoming -- not only is a blog your friend, Blogger itself is helping to manage your friend's affairs.)

2009/02/27

Building Out the Social Internetwork

It's been a busy week unifying Blogger Following and Google Friend Connect, so not a lot of time for blogging. A great thing about Friend Connect is that it's a catalyst for millions of individual social contexts (web pages). The contexts are separable but not totally disjoint -- you can choose to leverage and include your existing social networks and social communication tools. (The last thing we need is another social network.)

This is a little bit like real life, where you have different social contexts without totally disconnected social networks.

A practical advantage of the unification is the ability to create OpenSocial gadgets that work well on both Blogger blogs and Google Friend Connect sites. Stay tuned...

2009/02/11

Sorry Son, I Can't Read to You Tonight...

...unless you pony up some cash to purchase the audio rights:
"They don't have the right to read a book out loud," said Paul Aiken, executive director of the Authors Guild. "That's an audio right, which is derivative under copyright law." -- WSJ
In addition to the millions of felonious parents, the American Foundation for the Blind (AFB) might have a word or two to say about this.

2009/02/10

The Toaster Project

Synopsis: Make a toaster... from the ground up, with materials you manufacture yourself. Highly educational. Has an associated blog, naturally. Definitely making a Statement. More backstory here: http://www.we-make-money-not-art.com/archives/2009/02/-thomas-thwaites-the-toaster.php.