Working backwards on the Semantic Web
I just had a bit of an insight: I think that many people, including myself, may be taking the wrong approach to working on the Semantic Web. I think that dealing with XML serializations of RDF, RDFS, and OWL is just plain wrong. Tools like Protege offer a frame-like UI that makes ontologies a lot easier to work with (and free description logic reasoners like FaCT++ help by checking for consistency). However...
I had a little free time today to work on a pet project that Obie inspired: writing Ruby wrapper code that makes it easier to deal with RDF/RDFS/OWL by loading files and automatically mirroring ontology classes as Ruby classes. I would work in Protege, then write Ruby code to consume the RDF/RDFS/OWL files so that I could work in a decent language. OK, fine.
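To make the class-mirroring idea concrete, here is a minimal sketch of what I mean. The `OntologyMirror` module and its `mirror` method are hypothetical names, and the hash it consumes is a stand-in for what a real RDF/RDFS parser would extract from a file; the point is just that Ruby's metaprogramming (`Class.new`, `const_set`) makes the mirroring itself nearly trivial.

```ruby
# Hypothetical sketch: mirror ontology classes as Ruby classes.
# The input hash stands in for parsed RDFS triples like
#   :Dog rdfs:subClassOf :Animal .
module OntologyMirror
  # classes: maps each ontology class name to its superclass name (or nil).
  def self.mirror(classes)
    defined = {}
    # Recursively build each class, creating superclasses first.
    resolve = lambda do |name|
      defined[name] ||= begin
        parent = classes[name]
        Class.new(parent ? resolve.call(parent) : Object)
      end
    end
    classes.each_key { |name| const_set(name, resolve.call(name)) }
  end
end

OntologyMirror.mirror("Animal" => nil, "Dog" => "Animal")

OntologyMirror::Dog.ancestors.include?(OntologyMirror::Animal) # => true
```

The subclass relationships from the ontology carry straight over into Ruby's inheritance hierarchy, so `is-a` queries become ordinary `ancestors` checks.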
However, this all still seems more than a little wrong to me. Since the Semantic Web is largely about ontologies and knowledge representation, why turn our backs on decades of AI research? Why not work with knowledge representation systems written in Lisp (or Prolog, Ruby, etc.) and have a back end that serializes to XML/RDF/RDFS/OWL as required? Really, use the best notation possible for all of the human-intensive work.
While Protege is a terrific tool, I still think that using older technologies like KEE, Loom, PowerLoom, etc. in first-class programming environments makes a lot of sense. Any language with good introspection (Ruby and Common Lisp come to mind) could support XML serialization when required.
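As a rough illustration of what introspection buys you here, the sketch below uses Ruby's `instance_variables` to emit a crude RDF/XML description of an arbitrary object, with the XML built via the standard-library REXML. The `ex:` namespace, the `to_rdf_xml` helper, and the `Dog` class are all made-up examples, not part of any real serialization library:

```ruby
require "rexml/document"

# Hypothetical helper: walk an object's instance variables via introspection
# and emit them as properties in an RDF/XML description.
def to_rdf_xml(obj, uri)
  doc = REXML::Document.new
  rdf = doc.add_element("rdf:RDF",
    "xmlns:rdf" => "http://www.w3.org/1999/02/22-rdf-syntax-ns#",
    "xmlns:ex"  => "http://example.com/ns#")  # made-up example namespace
  desc = rdf.add_element("rdf:Description", "rdf:about" => uri)
  desc.add_element("ex:type").text = obj.class.name
  obj.instance_variables.each do |ivar|
    name = ivar.to_s.sub(/^@/, "")
    desc.add_element("ex:#{name}").text = obj.instance_variable_get(ivar).to_s
  end
  doc.to_s
end

class Dog
  def initialize(name)
    @name = name
  end
end

puts to_rdf_xml(Dog.new("Rex"), "http://example.com/dogs/rex")
```

The nice part is that the serializer never needs to know anything about `Dog` ahead of time; the same few lines work for any plain Ruby object, which is exactly the property you want when the XML is just a back-end interchange format.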