Content Representation With A Twist

Thursday, October 23, 2008

output of a MOM net stimulation

No, the MOM project is not gone. Because of horrible 14-hour workdays (incl. 4 hours of commuting), I simply lack the time to work on MOM. However, tonight I found some time to tinker with it and developed a very simple kind-M net generator in Ruby which features node salting, dotty output and progression over time. Below you find a screenshot of its output, showing three different states of the same MOM net.

The top graph is almost the initial state. All nodes were set to a value of two and thus were active. Additionally, the net was salted, meaning some nodes were picked at random and their values were increased by a random amount. That is why there are floating point numbers in the graph at all.
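The generator itself is not listed here, but a minimal Ruby sketch of the initialization and salting step might look like the following. The names (Node, salt!), the node count and the salting parameters are my assumptions, not taken from the actual generator:

Node = Struct.new(:id, :value)

# every node starts at a value of two and is therefore active
nodes = (1..12).map { |id| Node.new(id, 2.0) }

# salting: pick a few nodes at random and increase their values by a
# random amount -- this is where the floating point values come from
def salt!(nodes, count = 3, max_boost = 2.0)
  nodes.sample(count).each { |node| node.value += rand * max_boost }
end

salt!(nodes)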

To keep the node shapes small while still being able to read the values, I added helper nodes. They display the values and link to their respective real node. These helper nodes obviously aren't related to anything except the nodes they serve as labels for.
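In the dotty output such a label helper can simply be an extra node per real node, tied to it by an otherwise meaningless edge. A sketch of how the Ruby generator might emit that, reusing the Node struct and nodes array from the sketch above (all shapes and attribute choices are my assumptions):

# emit one real node plus its value-displaying helper node in DOT syntax;
# the helper links only to the node it labels
def emit_node(node)
  "  n#{node.id} [shape=circle, label=\"\"];\n" \
  "  v#{node.id} [shape=plaintext, label=\"#{format('%.2f', node.value)}\"];\n" \
  "  v#{node.id} -> n#{node.id} [arrowhead=none];\n"
end

puts "digraph mom {"
nodes.each { |node| print emit_node(node) }
puts "}"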

Note that, despite the bright box in the bottom left corner of each graph, the screenshots show immediately consecutive states: top is state 1, middle is state 2, bottom is state 3.
 

The center graph shows the net after one iteration. What happened to each node was this: every successor node (top level) had its value divided by 2.0; every active predecessor node stimulated its successors by 1.0; and every predecessor node -- active or not -- had its value divided by 2.0 as well.
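Put into code, one iteration could look like the sketch below, simply mirroring the rules just described. The activity threshold (a value of at least two, judging from the initial state) and all names are my assumptions; the real generator may well order these steps differently:

ACTIVE_THRESHOLD = 2.0  # assumption: the initial value of two counts as active

# edges maps each predecessor node to the array of its successor nodes
def iterate!(predecessors, successors, edges)
  # remember which predecessors were active before anything decays
  active = predecessors.select { |p| p.value >= ACTIVE_THRESHOLD }

  # every successor node has its value divided by 2.0
  successors.each { |s| s.value /= 2.0 }

  # every active predecessor stimulates its successors by 1.0
  active.each { |p| (edges[p] || []).each { |s| s.value += 1.0 } }

  # every predecessor -- active or not -- has its value divided by 2.0 too
  predecessors.each { |p| p.value /= 2.0 }
end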

You may notice a color change: nodes that are still active are colored green, as are their edges to successor nodes. Nodes that are no longer active have become orange. You might also notice a different edge style: a dotted line means the predecessor node may have changed, but at the time of the screenshot it did not affect its successor. The point is: in case a node turns green but did not affect its successor, the dotted line makes that clear.
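When the dotty output is written, those colors and edge styles can be derived from the node states. A rough sketch, continuing the sketches above; whether a successor was actually stimulated is reduced to a simple flag here, and the further fading of colors over later steps is left out:

# fill color reflects whether a node is still active
def node_color(node)
  node.value >= ACTIVE_THRESHOLD ? "green" : "orange"
end

# an edge stays solid while its predecessor actually stimulated the
# successor; a dotted line marks a predecessor that changed without
# (yet) affecting its successor
def edge_attributes(predecessor, stimulated_successor)
  color = node_color(predecessor)
  style = stimulated_successor ? "solid" : "dotted"
  "[color=#{color}, style=#{style}]"
end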
 

Another step in time, the bottom graph: you notice that the color of most of the nodes has faded out further. The links between label nodes and real nodes are solid as always. The values of the originally unsalted nodes are down to 0.5.
 

Why this new verve? -- I just thought it would be a good idea to present MOM and share it with others so we can improve it together, rather than aiming and aiming for a perfect outcome that, for lack of time, only comes along slowly.




Well, actually Nathan Sobo's presentation of Treetop inspired me.

      
Updates:
none so far

Saturday, May 03, 2008

Generating a basic MOM network

Just for the sake of being able to look this up later, as I am doing this over and over again in the several programming languages I try out to get ahead with MOM.

I need a graph of at least two distinct layers of nodes (vertices), connected by edges (arcs). The graph must not contain any loops. Therefore I select a number of bottom layer nodes -- the rest become non-bottom layer nodes. For ease of addressing both kinds of nodes, let's say the bottom layer nodes have IDs 1..x and the non-bottom layer nodes have IDs x+1..n.

To avoid loops, each bottom layer node shall be connected to a non-bottom layer node, and each non-bottom layer node shall be connected to a bottom layer node.

Though a single connected graph would be desirable, this does not guarantee one: the generation may result in several independent graphs. -- If you are aware of a better approach to generating the graph, please let me know.

The connectivity of the graph can be improved by randomly adding some further edges. To avoid creating loops here, the additional edges may be drawn only from non-bottom layer nodes to nodes with a smaller ID, including bottom layer nodes. -- Effectively, this adding of edges may cause the generated graph to become multi-leveled.
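A minimal Ruby sketch of that edge-adding step, assuming the ID scheme described above (bottom layer 1..x, non-bottom layer x+1..n); the variable names and the number of extra edges are my own choices:

bottom_nodes = 20   # x: bottom layer nodes have IDs 1..20
all_nodes    = 50   # n: non-bottom layer nodes have IDs 21..50

# draw extra edges only from non-bottom layer nodes down to smaller IDs,
# so no loop can ever be introduced
extra_edges = []
10.times do
  from = rand((bottom_nodes + 1)..all_nodes)  # a non-bottom layer node
  to   = rand(1...from)                       # any node with a smaller ID
  extra_edges << [from, to]
end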
 

An example of such a mini MOM net generator can be seen below. It is implemented in Pascal (compiled with the GNU Pascal Compiler) and omits the connectivity-increasing step. It does, however, dump the generated graph:

program MOMnetGeneration;

const
  all_nodes    = 50;                    { total number of nodes }
  bottom_nodes = 2 * all_nodes DIV 5;   { IDs 1..bottom_nodes form the bottom layer }

type
  appropriate_int = shortint;
  tEdges = array [1..all_nodes] of appropriate_int;

{ random integer in the inclusive range min..max }
function rnd(min, max : appropriate_int) : appropriate_int;
begin
  rnd := random(max - min + 1) + min;
end;

var
  edge : tEdges;
  i    : appropriate_int;
begin
  { connect bottom to non-bottom nodes: }
  for i := 1 to bottom_nodes do
    edge[i] := rnd(bottom_nodes + 1, all_nodes);

  { connect non-bottom to bottom nodes: }
  for i := bottom_nodes + 1 to all_nodes do
    edge[i] := rnd(1, bottom_nodes);

  { dump the generated network: }
  for i := 1 to all_nodes do
    writeln(i, ' -> ', edge[i]);
end.


I compared the speed of Ruby and Pascal for creating 2^20 31-bit numbers, which took the compiled Pascal program about 3 seconds and Ruby between one and two minutes. Therefore, I am back to compiled languages which work close to the hardware. I might try C/C++ as well.

      
Updates:
none so far

Sunday, January 20, 2008

Build a new 'programming' language that does not instruct computers but tells them what to make sure of?

Just reading the latest news on a security hole in Winamp, still having in mind how our programming trainees tend to assume things and base their programming on those assumptions -- instead of making quite sure --, and having worked on knowledge representation for a rather long time, a thought popped into my mind with the Winamp issue:

As long as there are minds out there trying to make anyone's programs do things they were not intended for -- and that might be the case for a pretty long time --, programming might at first glance keep looking like it has looked for decades: instruct the computer what to do and in which order to do it. But at second glance, keeping in mind the people who try to abuse programs, and the other people who make that easy for them instead of making sure, programming looks to me as if it is turning into knowledge work rather than a lining up of building blocks: the kind of knowledge work whose job is to make sure things are the way we would otherwise merely assume them to be. The whole program then becomes a sort of building in which every single building block is not only lined up but verified too, so that in the end the whole building consists of knowledge rather than of building blocks of assumptions.
 

The majority of my achievements in knowledge representation was to figure out two fundamental concepts, aside from a minor but even more fundamental one. The concept of recognition is after the question "How to recognize items by a given subset of their features?", while reorganization asks how to reorganize a given knowledge representation graph to make it less matter/energy consuming while still representing the very same content. The minor one was how to store content in graphs at all -- very basic, but important nevertheless.

Quite a while ago, I wondered whether there might be a reason to base some kind of computer-instructing language on that effort. Back then I didn't see any such reason, and I didn't make any further effort to find one.

However, having come to the point today of seeing secure programs as buildings of certainties, there in fact might be a reason to convert my efforts into a new computer-instructing language.

      
Updates:
none so far