Content Representation With A Twist

Showing posts with label complex content built out of simpler pieces. Show all posts

Wednesday, February 21, 2007

About the Simple Set Core

The Simple Set Core project is about a set engine. The aim of this set engine is to recognize ("identify") items -- sets of features -- by only some of their features, to store these items and their features recursively as directed graphs, and to reorganize these graphs so that implicit items and features become visible while the graph as a whole becomes less dense and thus easier to handle.
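The recognition idea above can be illustrated with a minimal sketch. The class and method names here are invented for illustration and are not taken from the Simple Set Core itself; items are simply modeled as sets of feature identifiers.

```python
# Minimal sketch: items stored as feature sets, recognized by partial features.
# All names and example data are illustrative, not from the actual project.

class SetEngine:
    def __init__(self):
        self.items = {}  # item id -> frozenset of features

    def store(self, item_id, features):
        self.items[item_id] = frozenset(features)

    def identify(self, partial_features):
        """Return the ids of all items whose feature set contains the query."""
        query = frozenset(partial_features)
        return [item_id for item_id, feats in self.items.items()
                if query <= feats]

engine = SetEngine()
engine.store("car", {"wheels", "engine", "steering", "seats"})
engine.store("bicycle", {"wheels", "pedals", "steering"})

# Only some of the features suffice for identification:
print(engine.identify({"engine", "wheels"}))  # ['car']
print(engine.identify({"wheels"}))            # ['car', 'bicycle']
```

The recursive graph storage would then attach each feature as an item of its own, but even this flat version shows the partial-match principle.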

Background

The set engine is part of a larger project, the Model of Meaning. Its approach is that notions ("meanings") consist of smaller notions (or of raw data like "light sensed"). Unlike common approaches, the Model of Meaning drops the familiar "is a" relationship between things: the model assumes that, although a car is a kind of vehicle, the vehicle is a part of the car. At first glance this is hard to comprehend, but consider that you are only ever thinking of the car and the vehicle. Neither is physical, so there is no physical problem of "cramming" a merely imagined vehicle into a car -- which is itself just imagination.

Benefit For The Web

With the Model of Meaning in mind, the set engine alone could do the web a big service: if tags were related to other tags by "is part of" relationships, then, first, the people doing the tagging could stop spelling out implications, and second, people searching for content could find matching content even when looking for low-level implications of that very content.
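A small sketch of what such a search could look like, assuming a table of "is part of" edges. The tag names and the `part_of` table are invented here; the point is that a search for a low-level tag expands upward through the part-of relations, so content tagged only with the whole still matches.

```python
# Sketch: searching tagged content through "is part of" relations.
# All tag names and data are illustrative.

# part -> the set of wholes that contain it
part_of = {
    "vehicle": {"car", "truck"},
    "engine": {"car", "truck"},
    "car": {"taxi"},
}

def wholes_containing(tag):
    """All tags reachable upward through 'is part of' edges, the tag included."""
    seen, stack = {tag}, [tag]
    while stack:
        for whole in part_of.get(stack.pop(), ()):
            if whole not in seen:
                seen.add(whole)
                stack.append(whole)
    return seen

# A post tagged only "taxi" -- no implications spelled out by the tagger.
tagged_content = {"my-taxi-post": {"taxi"}}

def search(tag):
    matches = wholes_containing(tag)
    return [doc for doc, tags in tagged_content.items() if tags & matches]

# A search for the low-level notion "vehicle" still finds the post,
# via vehicle -> car -> taxi:
print(search("vehicle"))  # ['my-taxi-post']
```

The tagger only wrote "taxi"; the implication that a taxi contains a vehicle is carried by the graph, not by the tagging person.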

Perspective

Also, if there is a way to recognize items by just parts of their features, tag graphs could be integrated with each other automatically, simply because the items mentioned in the graphs could also be recognized by their features. And if the tags were then dropped and replaced by notion identifiers ("IDs") to which the tags become attached, that might even overcome the language barrier, once and for all.
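The merging idea can be sketched as follows, under the simplifying assumption that two tags denote the same notion exactly when their feature sets coincide (the real engine would match on partial features; graph names and data are illustrative):

```python
# Sketch: merging two tag graphs by recognizing shared items through their
# feature sets rather than their tag strings. All data is illustrative.

graph_en = {"car": {"wheels", "engine", "doors"}}
graph_de = {"Auto": {"wheels", "engine", "doors"}}

def merge(*graphs):
    """Assign one notion id per distinct feature set; tags become mere labels."""
    notions = {}   # frozenset of features -> notion id
    labels = {}    # notion id -> set of tags attached to that notion
    for graph in graphs:
        for tag, features in graph.items():
            key = frozenset(features)
            nid = notions.setdefault(key, f"notion-{len(notions)}")
            labels.setdefault(nid, set()).add(tag)
    return labels

merged = merge(graph_en, graph_de)
# "car" and "Auto" attach to the same notion id -- the language-specific
# tags survive only as labels on a shared, language-neutral notion.
print(merged)
```

This is the sense in which replacing tags by notion IDs could bridge languages: the English and German tags end up as two labels on one notion.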

      
Updates:
none so far

If word processors could know the words' meanings...

If complex content can be built out of atomic pieces of content, how can the meaning of the complex content ever become clear?

I am thinking of a word processor. The whole document is built out of words, and these out of letters. There is no machine-understandable content attached to the words; the words remain mere strings of letters. The machine has no idea of the meaning of these strings of letters, nor of the sequences of words forming sentences and the whole document.

If at least the words had concepts attached, it might become much simpler for a machine to figure out the content of the whole document, especially since the words themselves are already somewhat related to each other by the underlying grammar.

To make at least the content of the words accessible, one approach could be to integrate a request-for-explanation mechanism into the spell checker framework: if the content of a typed-in word is unknown, the user is asked to explain it.
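A toy sketch of such a hook, assuming a spell-checker-like loop over the typed words. The lexicon and the prompt flow are invented for illustration; a real integration would sit inside the editor's existing spell-check callback.

```python
# Toy sketch of a "request for explanation" hook: when a typed word has no
# concept attached, ask the user for one. Lexicon and flow are illustrative.

lexicon = {
    "letter": "a written symbol",
    "word": "a sequence of letters",
}

def check(words, explain=input):
    """Ensure every word has a concept; ask via `explain` when one is missing."""
    for word in words:
        if word not in lexicon:
            lexicon[word] = explain(f"Please explain '{word}': ")
    return {w: lexicon[w] for w in words}

# A stub stands in for the interactive prompt here:
result = check(["word", "grammar"],
               explain=lambda prompt: "rules for combining words")
print(result["grammar"])  # rules for combining words
```

After one such session, "grammar" carries a concept the machine can reuse, just as a spell checker remembers a word added to its dictionary.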

      
Updates:
none so far