The MOM Simple Set Core (MOM SSC) is the most recent implementation of MOM. MOM is a trinity: research, development, and a project driving both of them ahead. At its core, MOM is the Model of Meaning plus the research based on that model, aiming to represent every kind of content without words and tagging, based only on graphs and bare input sensors, such as 'light given', 'oxygen here', 'soft ground'. -- However, since 'there is a red light under that passenger seat, calmly blinking' is rather complex content, and such content cannot yet be expressed as a graph, MOM currently accepts crutches -- labels or pieces of software that signal that a certain event occurred, e.g. 'web browser cannot render that page correctly'. As MOM improves, such crutches shall be replaced by the more flexible (and error-resistant) representation of content that MOM offers.
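A minimal sketch in Ruby (the language MOM SSC is written in) of how such a label-free net might look. The Node class and the sensor strings are illustrative assumptions, not actual MOM SSC code; the point is that the graph itself carries no labels, and the sensor names live only in an external lookup table, playing the role of the crutches mentioned above.

```ruby
# Sketch: content as a bare graph. Nodes carry no labels of their own;
# the strings below stand in for raw sensor signals and are kept outside
# the graph, in a lookup table (the "crutch").
class Node
  attr_reader :edges
  def initialize
    @edges = []
  end
  def connect(other)
    @edges << other
    other.edges << self
  end
end

# one node per sensor signal, created on demand
sensors = Hash.new { |h, k| h[k] = Node.new }

# an item is just a node connected to the sensor nodes that define it
item = Node.new
['light given', 'oxygen here', 'soft ground'].each { |s| item.connect(sensors[s]) }

item.edges.size  # => 3
```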
There are several promised benefits of that. Getting content available without words implies the chance to render content into any language of the world. Getting there without tagging implies the chance that the machine knows the content represented -- instead of just handling it while remaining unaware of what it means. That in turn implies the chance to load content ("knowledge") into any sort of machine, such as traffic lights or vacuum cleaners or cars. Loading the knowledge might be much quicker than needing to train any sort of neural-network AI. -- MOM is not after implementing any sort of artificial intelligence but heads for getting the content available. Call it a [content-addressable] memory.
The error-resistant representation of content mentioned before originates from another core part of MOM, the recognition. -- Yes, that's right. MOM found recognition to be a part of memory, not of any sort of intelligence. It's an automatic process which, however, might be supportable by training [link: "is it learning?"]: weighting the graph's edges. [It's clear to me that humans can improve their recognition, but I am not sure whether the causes of learning equal those of improving the recognition abilities of a MOM net, hence the differentiation.] The core of MOM's recognition, and the cause of its error resistance, is that while the MOM net defines every possible feature of an item, for recognition not every one of them must be given, only a few. -- Which, by the way, matches a claim recently posted by Chris Chatham: only a few of the features of a known item suffice for a correct recognition of that item, because only the already known items are out there: to discern items that are similar, you don't need that many different features. But wait for the day you encounter a genuinely new item! -- You'd get it wrong, in any case. Remember the days when you were familiar with dogs as the only kind of pet animal? Then, encountering the first pet cat, you likely named it 'dog', didn't you? The same holds for any kind of flip picture, like the one in which you can see either a beautiful young woman or a rather old one. -- To get back to Chatham: on the issue of change blindness he claimed "[...] the brain is 'offloading' its memory requirements to the environment in which it exists: why bother remembering the location of objects when a quick glance will suffice?"
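The partial-cue recognition described above, and the cat-named-dog failure mode, can be sketched in a few lines of Ruby. This is a deliberately simplified stand-in, assuming items are stored as plain feature lists rather than as a MOM net, and ignoring edge weights:

```ruby
# Sketch of partial-cue recognition: only a few of an item's defined
# features need to be present; the best-matching known item wins. This is
# also why a genuinely new item (the first pet cat) gets mis-recognized
# as the closest known one (a dog).
def recognize(cues, known_items)
  known_items.max_by { |_name, features| (features & cues).size }.first
end

known = {
  'dog'  => ['fur', 'four legs', 'tail', 'barks'],
  'bird' => ['feathers', 'two legs', 'wings']
}

recognize(['fur', 'tail'], known)               # => 'dog' (two cues suffice)
recognize(['fur', 'four legs', 'meows'], known) # => 'dog' (the cat problem)
```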
Along with research, MOM is a development project. I am used to programming, hence casting MOM into software is the clearest way to go. MOM, cast into software, allows for verifying the model. Also, over time, a full implementation of MOM might result, which would make all the chances MOM offers readily available.
For example, the MOM Simple Set Core (MOM SSC) originally was only after implementing the MOM net, i.e. the functionality to maintain (parts of) a MOM net in computer memory (RAM). That's done now. Going further ahead, MOM SSC now aims at implementing the reorganizer. That's the part of MOM which shrinks the graph while keeping the same content -- even revealing content which was only implicit beforehand.
Former versions of MOM parts were implemented in Perl. For reasons of readability, Ruby was chosen for MOM SSC. Since the theoretical work on the reorganizer it has been clear that the reorganizer modifies the MOM net and hence challenges the strengths of the recognizer. To make the recognizer perform well even on reorganized MOM nets, I have now begun to implement the reorganizer. Having it in place, research on recognition can go into depth. Especially since having a reorganizer in place enables automatic testing of recognition quality: recognition on the reorganized net should provide the same results as recognition performed on the original net. The fine part is, neither reorganization nor recognition needs any labels for the nodes (i.e.: no mark-up/tagging).
The upcoming milestone of the MOM SSC sub-project might be to implement the core of the reorganizer, accompanied by a full duck-typing approach for the MOM SSC classes, and/or by fixing all the chances for improvement which have accumulated since the beginnings of MOM SSC. -- The core of the reorganizer is to detect and replace sub-networks of the MOM graph that occupy (far) more nodes/edges than necessary to represent a piece of content. The replacement reduces these sub-networks to just as many nodes/edges as are actually needed to represent the content.
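One way to picture the reorganizer's core, under a simplifying assumption of my own (not taken from MOM SSC): if two nodes have identical neighbour sets, they encode the same piece of content, so the sub-network can shrink by merging them without losing anything.

```ruby
require 'set'

# Sketch of shrink-by-merging: the graph is an adjacency hash
# { node_id => Set of neighbour ids }; nodes with identical neighbour
# sets are collapsed into one representative.
def reorganize(graph)
  merged = {}
  graph.group_by { |_id, nbrs| nbrs }.each_value do |dups|
    keeper = dups.first.first          # keep one representative node
    dups.each { |id, _| merged[id] = keeper }
  end
  # rebuild all edges through the representatives
  graph.each_with_object(Hash.new { |h, k| h[k] = Set.new }) do |(id, nbrs), out|
    nbrs.each { |n| out[merged[id]] << merged[n] }
  end
end

g = { a: Set[:c, :d], b: Set[:c, :d], c: Set[:a, :b], d: Set[:a, :b] }
reorganize(g).size  # => 2 (a/b merged, c/d merged)
```

The automatic quality test mentioned above falls out naturally: run the recognizer against the graph before and after such a pass and require identical answers.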
Updates: none so far
Content Representation With A Twist
Thursday, June 28, 2007
Friday, June 22, 2007
"big wet transistors" and "spaghetti wiring"
Doing the homework I caused myself sent me back to the "10 Important Differences Between Brains and Computers" article by Chris Chatham which I cross-read earlier today. His article's reader Jonathan points out several weaknesses in Chris Chatham's argumentation. Although I consider him mostly right with his objections, I consider it mostly nitpicking. In the end, I don't see the point he's about to make. Jonathan's argument "[...] there must be some level of modularity occurring in the brain. My gut instinct is telling me here that a brain based completely on spaghetti wiring just wouldn't work very well..." obviously does not take into consideration that the single neurons themselves might be the entities of the brain that do the processing and that constitute memory -- memory and processing in one. On this point, I am far from his arguments.
Another interesting point the reader Kurt van Etten puts into the round: "[...] (I do think a lot of writers equate neurons with big wet transistors)". -- Hm, I learned electrical engineering basics during my IT support assistant education, and every now and then I ponder how to cast MOM nodes into hardware; but when doing so, I primarily think of the content storable by such a node. That I might make use of a transistor for that is a minor detail. -- I haven't thought that far yet, but I don't presume transistors to be the only way to cast a MOM node into hardware. Anyway, it is interesting to learn how the majority of people occupied with the topic might imagine a single neuron. ... Right now, I think that imagination might be a bit too simplified and might lack this or that important property of a real neuron; hence anyone reducing their imagination of a single neuron to that simplicity might miss this or that important condition, or might fail to get this or that insight, just because of a too restricted ("simplified") look at the matter.
... Well, I got up to comment number 18, but that one might need some deeper consideration. Hence I'll take a break now and might continue pondering that #18 comment later.
While reading the comments I opened some more links provided there, mostly via the commenters' names linking to these sites:
- Kurama's Secret Lab -- "Blog destinado à discussão científica." ("a blog intended for scientific discussion"), which looks Portuguese to me and might give me a hard time reading through it. However, there is a Babelfish around, and there might also be this or that English posting amongst the others.
- Learning Computation -- A chronicle of one person's attempt to learn the theory of computation and related subjects.
- Greedy, Greedy Algorithms -- "Talk of computation, mathematics, science, politics and all the associated philosophy from two guys with aspirations in the world of math." The current top postings of that blog don't actually look algorithm-related at all, but are interesting anyway: information visualization, information freedom, and the politics involved. However, it probably doesn't lead any further on the MOM issue.
Updates: 20070623.12-42h CEST: added a headline to the posting
Some articles on content representation
Since I am nearing the first milestone of MOM SSC, I thought it might make sense to connect with others occupied with content representation. I searched Technorati for "content representation" (including the quotation marks) and found several postings apparently totally unrelated to content representation. Also, today "content representation" seems to primarily mean "mark-up", e.g. by terms provided by a thesaurus or the like. However, I found one that attracted me, pointing to another one which in turn pointed me to 10 Important Differences Between Brains and Computers by Chris Chatham, posted on March 27, 2007.
Number one of his list of differences is nothing new -- "Brains are analogue; computers are digital" -- and therefore skipped.
Number two reveals a new buzzword for describing MOM, "content-addressable memory", and describes it as follows: "the brain uses content-addressable memory, such that information can be accessed in memory through [...] 'spreading activation' from closely related concepts. [...]" When I first read it, I thought, oh, there might be someone with a concept similar to MOM in mind. At second glance, I realized that the claim likely originates just from psychology. The review continues the above quote with "For example, thinking of the word 'fox' may automatically spread activation [...]", which points a bit towards neurology. I wonder how the claim that "thinking of a word" or "thinking of a fox" or even "thinking of the word 'fox'" spins off activation can be proven. I mean, that would imply someone proved "the word 'fox'" and a neuron to be equal, since the neuron is the instance sending activation to other neurons. -- However, I do share the opinion that a neuron represents an item; I am just not aware of a proof for that. If you have one at hand, I'd be really glad if you could point me to the source. (Just since it'd support my own claims.)
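For what it's worth, the spreading-activation mechanism Chatham quotes can be sketched mechanically, whatever one thinks of the word-equals-neuron question. The graph, decay factor, and node names below are all illustrative assumptions of mine:

```ruby
# Sketch of spreading activation over an adjacency hash: activating one
# concept node leaks a fraction of its activation to its neighbours each
# round, so closely related content surfaces with higher activation than
# distantly related content.
def spread(graph, start, decay: 0.5, rounds: 2)
  act = Hash.new(0.0)
  act[start] = 1.0
  rounds.times do
    next_act = act.dup
    act.each do |node, a|
      (graph[node] || []).each { |nbr| next_act[nbr] += a * decay }
    end
    act = next_act
  end
  act
end

g = { fox: [:clever_animal, :fox_hunt], clever_animal: [:raven] }
a = spread(g, :fox)
a[:clever_animal] > a[:raven]  # directly related concepts end up more active
```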
Aside from that, I don't share the idea that thinking of a word might immediately stimulate content related to "memories related to other clever animals" [as my source, the above linked article, continues]. I think it at least requires thinking of the fox itself instead of just the word "fox". And, to finish the quoted sentence, it ends in "fox-hunting horseback riders, or attractive members of the opposite sex."
Back to MOM: taking "content-addressable memory" as a label for it is actually fitting. Chris Chatham continues his second difference with: "The end result is that your brain has a kind of 'built-in Google,' in which just a few cues (key words) are enough to cause a full memory to be retrieved." Well, that's exactly what MOM is after: to pick up matching "memories" by just a few cues. -- The way Chris Chatham describes the issue is pretty close to the original issue that led me to figuring out MOM: a guy whose heater got damaged and who must find the spare part by utilizing a thesaurus. The thesaurus mostly consists of abstraction relationships between the item names listed there. And rather often, there is no definition of the items provided -- thesaurus makers seem to presume you're a specialist in that field, or you wouldn't use a thesaurus at all. However, if that tool is restricted mainly to abstraction relationships, you cannot find the part you need to repair the heater. But what if you removed all the is-a (i.e. abstraction) relationships and set up a "kind of thesaurus" consisting of has-a relationships only? -- That way, you'd find the spare part as quickly as your in-mind "Google" might. At least if you've got another tool at hand that jumps you over all the temporarily unnecessary details, like the knowledge that -- let's switch the example to a pet cat -- the four feet, torso, tail, neck, and head that belong to the cat also belong to any quadruped animal, such as a pet dog, or also a pet hamster.
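The has-a-only "kind of thesaurus" from the paragraph above can be sketched as a hash from item to its parts. The heater and cat entries are made-up examples; the point is that finding the spare part becomes a simple walk over has-a edges, and the shared anatomy sits in one place so a lookup can jump over the details common to cat, dog, and hamster:

```ruby
# Sketch: a thesaurus of has-a relationships only, no is-a abstraction.
HAS_A = {
  heater:         [:burner, :thermocouple, :pilot_light],
  burner:         [:nozzle],
  pet_cat:        [:quadruped_body, :whiskers],
  pet_dog:        [:quadruped_body],
  quadruped_body: [:four_feet, :torso, :tail, :neck, :head]
}

# all parts reachable from an item by following has-a edges
def parts_of(item, graph = HAS_A)
  (graph[item] || []).flat_map { |p| [p] + parts_of(p, graph) }
end

parts_of(:heater).include?(:nozzle)      # the spare part is reachable
parts_of(:pet_cat) & parts_of(:pet_dog)  # the shared quadruped details
```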
Differences #3 to #9 I either was already familiar with or they became clear to me over the time I developed the Model of Meaning, e.g. the claim of "Difference #6: No hardware/software distinction can be made with respect to the brain or mind". That's rather clear, but I am not going to explain it here, since this posting is just a note to myself (and anyone who might be interested) that there is a posting around which, by content, is close to MOM.
Difference #10 looked unfamiliar to me at first glance -- "Brains have bodies" -- and although I wasn't aware of those change blindness findings, the claim "that our visual memories are actually quite sparse" quickly brought me back to what I already know (well, strongly believe; I lack the laboratories to prove my theoretical findings by scissoring mice). It's rather clear that "the brain is 'offloading' its memory requirements to the environment in which it exists: why bother remembering the location of objects when a quick glance will suffice?"
Updates: none so far