Content Representation With A Twist

Thursday, October 25, 2007

text/editor auto-completion as a possible real world application for MOM

Right now, I am using my secondary workplace PC. At this one, I am used to working one-handedly. And letting the auto-completion kick in.

In a recent blog posting somewhere else, I was writing about lectures, lecturers, and discussing as a topic, and the next issue I moved to was seminars. Intuitively, I expected the auto-completion to kick in and offer "seminars" -- which it didn't.

I pondered whether to file a feature request, suggesting to use a thesaurus in the background -- a word-processing one, not necessarily a real one -- to predict the words one is most likely to use soon. Then I noticed that traditional term ordering systems like thesauri might have a hard time doing so, and even more so the programmers who would actually have to implement such a tool. Well, on second glance, maybe brute force could help there; and as a text is a relatively small amount of data (and vocabularies even smaller), it might be doable and easy to implement.

The brute force approach could pick up and stem the words of the text, then follow all the relation edges of each term to its set of neighbours, collect them, order them alphabetically, and treat them like words actually appearing in the text: offer them for auto-completion wherever it looks appropriate.
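Just to make that concrete, here's a minimal sketch in Python. The stemmer and the thesaurus are hypothetical toy stand-ins of my own; a real implementation would plug in a proper stemmer and load, e.g., a WordNet-style vocabulary:

    def stem(word):
        # Crude stand-in for a real stemmer (e.g. Porter's algorithm).
        return word.lower().rstrip("s")

    # Hypothetical toy thesaurus: stemmed term -> related terms.
    THESAURUS = {
        "lecture": ["lecturer", "seminar", "course", "talk"],
        "discussion": ["debate", "seminar", "conversation"],
    }

    def completion_candidates(text):
        # Stem the words of the text ...
        words = {stem(w) for w in text.split()}
        # ... follow the relation edges of each term to its neighbours ...
        candidates = set(words)
        for w in words:
            candidates.update(THESAURUS.get(w, ()))
        # ... and order them alphabetically, ready for the completer.
        return sorted(candidates)

    print(completion_candidates("the lecture led to a discussion"))
    # "seminar" now shows up among the candidates, although it never
    # appeared in the text itself.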

On the other hand, a MOM approach might be to consider the words of the already typed-in text, take a step back, look at the features of the items behind those terms, and count which other item(s) share the most features with the ones mentioned so far. That way, we would additionally get a probability ranking of the upcoming terms. ... I'd do that myself, but the issue with tasks like this remains the old one: Where to get such interrelated collections of words in a reasonable amount and for reasonable .. no cost at all?
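Again just a sketch of what I mean, with made-up feature sets (not from any real MOM vocabulary):

    from collections import Counter

    # Hypothetical items and their feature sets.
    FEATURES = {
        "lecture":    {"teaching", "event", "speech", "academia"},
        "lecturer":   {"teaching", "person", "speech", "academia"},
        "seminar":    {"teaching", "event", "discussion", "academia"},
        "discussion": {"speech", "exchange", "discussion"},
        "banana":     {"fruit", "food"},
    }

    def rank_upcoming(mentioned):
        # Collect the features of everything mentioned so far ...
        seen = set().union(*(FEATURES[t] for t in mentioned))
        # ... then count, for every other item, how many of those
        # features it shares; more shared features = more likely next.
        scores = Counter()
        for item, feats in FEATURES.items():
            if item not in mentioned:
                scores[item] = len(feats & seen)
        return scores.most_common()

    print(rank_upcoming({"lecture", "discussion"}))
    # [('seminar', 4), ('lecturer', 3), ('banana', 0)]

Note that, unlike the brute force variant, this one yields the ranking for free: "seminar" beats "lecturer" because it shares more features with what has already been written.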

Updates:
none so far

Tuesday, October 23, 2007

adjusting the direction of this blog: blogging on current neuro issues

I've been working on the Model of Meaning "ideas conglomerate" for more than seven years now. The first question I count as part of that system of ideas I asked in summer 2000, during a more or less boring lesson on some economics subject.

Unfortunately, I picked up the issue before I was introduced to the methodology of scientific work. So what I figured out, what I read, and what I observed and perceived all went into one big mix-up. Which brought me into some trouble: Since I grasped several issues of behaviour, perception, and neurology/thinking each a while before someone else published their papers on them -- I read about those papers in a popular science magazine -- I strongly believe I am on the right course through the complex. However, I started out without sticking to scientific methods; I just figured things out. To gain the reputation ("credits") I thought I should get for that work, I would have had to put the whole building of what I had figured out onto a new, stable, scientific foundation. But at the same time, I already felt unable to differentiate between what I had figured out by myself and what I had learnt from some external source: What someone was telling me might or might not imply what I had figured out already. How to make sure they and I meant, implied, the same thing?

To prove I was right, I thought the better option would be to just implement the whole idea as a piece of software -- that is what you know as MOM today.

However, as I am currently unemployed, I became really distracted from the MOM project. And involved in more professional blogging. Which continuously carries along the question of how to increase one's reputation.

Now, I was reading a posting on a not so reliable popular science [kind of] blog about sleep deprivation and how it affects rational thinking. As sleep is a topic I have touched via MOM several times, I was interested in verifying whether or not the "blog" was re-narrating the research correctly. As CiteSeer seems to be down currently, I launched Google Scholar with a query for articles by Seung-Schik Yoo from 2007, in the hope of finding that article. However, by accident, I found A deficit in the ability to form new human memories without sleep by the same person (as co-author), published in February 2007 -- which nudged me even closer to the insights I had gained through MOM. As I am currently seeing a regular visitor from Korea on this MOM blog, I thought it might be worth a shot to just start blogging about MOM -- even if I don't have any scientific reputation in this field.

That's why you are reading this posting here.

The impulse was that I might gain and convince some audience, maybe even earn some reputation in this field, even if not a scientific one. Besides, I think it might be fun to comment on what's going on in this area, even without any scientific degree.

Additionally, I am interested in perception, usability, comprehensibility -- everything that has anything to do with mind and memory. But one thing I am not interested in is artificial intelligence. Whenever I touched intelligence in the past, it was merely a by-product.

Whatever. Let's see whether or not I'll actually blog on it...

Updates:
none so far

Tuesday, October 09, 2007

on using tags in file systems

Stumbled upon, but not yet read: a 2005 blog entry by someone on using tags in file systems.

Updates:
none so far