danny.ayers at gmail.com
Fri Oct 28 05:08:55 PDT 2005
On 10/27/05, Kevin Marks <kmarks at technorati.com> wrote:
> On Oct 26, 2005, at 4:54 PM, Rohit Khare wrote:
> > Moral: don't try to encode machine-readable (and machine-actionable)
> > moral judgments. hReview is much more innocuous because it's primarily
> > human-readable today, not the basis of a robot censor.
> I wrote about why technology should be amoral yesterday:
> Human readable is good.
Indeed. But it seems there may be a broader issue when we're talking
of making data machine-readable too (hence repurposable, remixable
etc). Every time we refer to someone else's resources, even through
simple linking, we are in a sense using their data. Although the "deep
linking requires permission" silliness seems to have passed over,
there are probably more hurdles on the horizon.
Not long ago the Plink service, essentially a FOAF
aggregator/queryable DB, was pulled. People didn't like
their names showing up where they didn't expect them. Maybe simple
reviews are below the sensitivity threshold, but any tool using XFN
data could trigger the same reaction.
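To make the point concrete: here's a rough sketch (not from the original post; the markup and class name are invented for illustration) of how little code it takes to harvest XFN relationship data from a page, using only the Python standard library.

```python
# Minimal XFN harvester sketch: collects (href, rel-values) pairs for
# links carrying XFN relationship terms. Illustrative only.
from html.parser import HTMLParser

# A subset of XFN 1.1 rel values
XFN_VALUES = {"friend", "met", "colleague", "co-worker", "muse", "sweetheart"}

class XFNParser(HTMLParser):
    """Collect links whose rel attribute contains XFN terms."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        attrs = dict(attrs)
        rels = set((attrs.get("rel") or "").split())
        xfn = rels & XFN_VALUES
        if xfn and "href" in attrs:
            self.links.append((attrs["href"], sorted(xfn)))

# Hypothetical page fragment
page = '<a href="http://example.org/jane" rel="friend met">Jane</a>'
p = XFNParser()
p.feed(page)
print(p.links)  # [('http://example.org/jane', ['friend', 'met'])]
```

Anyone can run something like this over a crawl, which is exactly why people's reaction to seeing their relationships aggregated may mirror the Plink episode.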
> > IMHO: Let folks tag things, and let the terms emerge (folksonomically),
> > rather than diving into yet another top-down taxonomy rathole.
> This makes sense, definitely.
Yep, sure. But this is more to do with *how* things are said, rather
than *what* is being said. Still potentially a minefield for morality, though.
There are other related situations. VeriSign appear to be planning a
blogspam-free news aggregator. The call on what is spam may be
relatively amoral, but not far from there is censorship based on the
content - adult-only, or even politically correct (in the most general sense).
My hope would be that by making more data explicit, filtering can at
least be more accurate, and demand from Web users in general will
favour non-prejudiced sources.