[uf-discuss] microformat proposal: dependency graphs (for software)

Derrick Lyndon Pallas derrick at pallas.us
Sat Jan 27 11:54:53 PST 2007


I'm interested in feedback on the following idea. (Right now, I'm in 
the process of developing a corpus of live examples, but that's not 
what I'm asking for here.)

Essentially: software is everywhere. Because it is complex, it is 
modularized, which means that not every developer is on the same 
page. A change in the kernel can bubble up through the standard 
library into a maze of support libraries. With luck, separation of 
concerns and good interface design dampen the fallout, but subtle 
negative consequences can still slip through.

For example, a change in the standard library can break a bad 
practice in libfoo (say, ignoring a return value), which in turn 
causes an array-bounds problem in libbar. Since your application uses 
libbar and has no direct connection to libfoo, how do you know that 
you need to upgrade to a newer version of libfoo? This is especially 
a problem if libbar doesn't need to be recompiled (it retains binary 
compatibility with libfoo), or if libfoo or libbar is optional.

The same problem, seen from the other direction, is walking a 
directed, acyclic dependency graph. If I am building a new 
application from scratch, how do I know which libraries (and which 
versions) will work together? I could go to the web page for 
ImageMagick only to learn that I need a new version of libpng. By the 
same token, libpng needs an updated version of libz, and libz doesn't 
like my old crufty compiler. Add to that the complexity that I really 
just want to install RMagick (a Ruby interface to *Magick), which can 
use either ImageMagick or GraphicsMagick (though the interface 
changes in subtle ways depending on which library you choose), and 
that these choices interact with my version of Ruby, all of the 
modules I use in Ruby, any code they link in, and the compilers for 
that code, ad infinitum. Suddenly I've got 40 browser tabs open and 
still no graphics library.
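
To make the shape of the problem concrete, the graph behind that 
story looks roughly like this (the edges are from memory and purely 
illustrative, not a complete list):

    RMagick --> (ImageMagick | GraphicsMagick) --> libpng --> libz --> compiler
    RMagick --> Ruby --> (every Ruby module) --> (everything they link) --> ...

Every edge is another project homepage to find and another version 
constraint to satisfy by hand.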

The preceding paragraph is a (sad but) true story. Part of the 
problem was that I didn't own the system, and the system had "stable" 
versions of packages on it, as defined by the Fedora Core team. Right 
now, there are ways to do these builds: whether you're using binaries 
(apt-get, yum, rpm) or source (emerge, srpm, *-src), you have to go 
through a clearinghouse. Someone took the time to compile binaries or 
repackage source trees and write down what needed what.

But the fact is, all of this information is already on the homepage 
of most software projects. Current aggregators rely on the author(s) 
submitting it manually. Furthermore, commercial packages don't 
normally submit product information to sites like FreshMeat, 
SourceForge, or any language-specific repository (CPAN, PEAR).

Because the information (versions, dependencies, package URLs, bug 
alerts) is already there (see the examples below), it should be 
fairly straightforward to figure out what people already do and 
"semantic it up."

Take a look at:
 * http://kernel.org/
 * http://libpng.org/pub/png/libpng.html
 * http://freshmeat.net/projects/libvc/
 * http://raa.ruby-lang.org/project/fcgi/
 * http://www.gentoo-portage.com/dev-lang/erlang/Dep#ptabs
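
To give a flavor of what "semantic it up" might mean in markup, here 
is a purely hypothetical sketch. None of these class names exist in 
any draft; hsoftware, version, depends, and version-min are 
placeholders I made up for illustration:

    <div class="hsoftware">
      <a class="url fn" href="http://example.org/libbar/">libbar</a>
      <span class="version">1.4.2</span> requires
      <span class="depends">
        <a class="url fn" href="http://example.org/libfoo/">libfoo</a>
        <span class="version-min">2.0</span> or later
      </span>
    </div>

A crawler could follow the depends links from one project page to the 
next and assemble the whole graph automatically, instead of waiting 
for a human to re-enter it at a clearinghouse.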

Mix all of this with hCard, for authors; hReview, to help you decide 
between optional dependencies and to detect bit-rot; and rel-license, 
for software rights issues. Suddenly we have a very powerful, 
automatic, user-driven system for keeping software up to date.
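
Extending the hypothetical sketch above: hCard (class="vcard" with fn 
and url) and rel-license are real microformats and slot in directly, 
while the hsoftware wrapper is still invented:

    <div class="hsoftware">
      <a class="url fn" href="http://example.org/libbar/">libbar</a>
      <span class="version">1.4.2</span>, by
      <span class="vcard"><a class="url fn"
        href="http://example.org/~jane/">Jane Hacker</a></span>,
      released under the
      <a rel="license" href="http://www.gnu.org/licenses/lgpl.html">LGPL</a>.
    </div>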

~D


