Sunday, August 13, 2006

Hypothetical facts

If you're not into AI or cognitive science you may have a lot of trouble understanding this post. Sorry.

I'm preparing for a class I'm about to teach, and I'm learning the programming language Python, since in a month I'll have to teach it to my students. I thought I'd implement a simple inference function to do modus ponens.

Refresher: Modus ponens is a simple inference rule: if the two statements "If P then Q" and "P" are both true, then one can conclude "Q".
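As a sketch of what I mean, here's a minimal modus ponens function in Python (the names `modus_ponens`, `facts`, and `conditionals` are just my illustration, not a real library):

```python
def modus_ponens(facts, conditionals):
    """For every rule 'if P then Q' whose antecedent P is a known
    fact, conclude Q. Facts are plain strings; conditionals is a
    dict mapping antecedents to consequents."""
    conclusions = set()
    for p, q in conditionals.items():
        if p in facts:
            conclusions.add(q)
    return conclusions

facts = {"its-raining"}
conditionals = {"its-raining": "you-are-wet"}
print(modus_ponens(facts, conditionals))  # {'you-are-wet'}
```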

I've been thinking that much of memory is stored in the mind as facts: short statements, each of which is true given some context, or "microtheory." In support of this notion, the Cyc database, a long-term project dedicated to encoding, by hand, all (estimated) 10 million facts people have for a commonsense understanding of the world, switched to facts after they found their Minskyan frame system unwieldy.

For me, facts are represented as two or more concepts connected with some relation. So, for example, if a particular dog, say "dog2", is red, it might get represented (dog2 has-color red-color). This means there's a concept in memory for dog2, the has-color relation, and the red-color concept, and the two concepts dog2 and red-color are connected with the relation. So far so good. My dissertation AI was built like this.
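One natural way to render this representation in Python is as tuples in a set; this is just a sketch of the idea, not my actual dissertation system:

```python
# Memory as a set of (concept, relation, concept) tuples.
memory = set()

def store(fact):
    """Add a fact like ('dog2', 'has-color', 'red-color') to memory."""
    memory.add(fact)

store(("dog2", "has-color", "red-color"))
print(("dog2", "has-color", "red-color") in memory)  # True
```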

So now we come to modus ponens-- I just hit a problem I'm shocked I'd never encountered before. Let's say we want to store "If it's raining, you are wet." You might encode this as (its-raining conditional-relation you-are-wet). For AI people who scoff at the idea of its-raining being a single symbol, imagine it as a pointer to a complete fact, such as (state-of-world4527 has-weather rain).

But encoded this way, my system creates a symbol for its-raining. And having that symbol in memory implicitly asserts that the fact is true. That is, if there is a fact its-raining in memory, then the agent believes it is indeed raining. Why? Because I think facts default to true. I believe that under cognitive load, if a person hears that some fact x is false, they will, given limited resources, remember x as true first, and store the caveat that it's false only if attentional capacity permits.

So when I encode (its-raining conditional-relation you-are-wet) in my system, I'm also asserting its-raining, and therefore our conclusion, you-are-wet! Not good.
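The trap can be shown in a few lines of Python. Here `store_conditional` is a hypothetical function that naively registers the conditional's argument symbols as facts, which (since facts default to true) immediately commits the agent to both:

```python
facts = set()
conditionals = {}

def store_conditional(p, q):
    """Store 'if P then Q' -- naively creating the argument symbols
    as facts, which is exactly the problem."""
    conditionals[p] = q
    facts.add(p)  # oops: the agent now believes it's raining
    facts.add(q)  # and that you are wet

store_conditional("its-raining", "you-are-wet")
print("its-raining" in facts)  # True -- purely from storing the rule
```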

One way out of this is to attach explicit unknown truth values to the arguments of the conditional when learning it. This seems very messy to me, however: when a new fact comes in, it has to be checked, and if its relation is conditional-relation, its arguments get marked with unknown truth values (unless, of course, those facts already exist in memory with truth values).
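A sketch of that fix, assuming a simple three-valued scheme ("true", "false", "unknown") that I'm inventing here for illustration:

```python
truth = {}          # symbol -> "true", "false", or "unknown"
conditionals = {}   # antecedent -> consequent

def store_conditional(p, q):
    """Store 'if P then Q', defaulting its arguments to 'unknown'
    unless they already have a truth value in memory."""
    conditionals[p] = q
    for sym in (p, q):
        truth.setdefault(sym, "unknown")

def modus_ponens():
    """Propagate: any consequent whose antecedent is true becomes true."""
    for p, q in conditionals.items():
        if truth.get(p) == "true":
            truth[q] = "true"

store_conditional("its-raining", "you-are-wet")
modus_ponens()
print(truth["you-are-wet"])  # unknown -- nothing concluded yet

truth["its-raining"] = "true"  # the agent observes rain
modus_ponens()
print(truth["you-are-wet"])  # true
```

The setdefault call is what implements the "unless those facts already exist in memory with truth values" caveat: an existing truth value is never overwritten when a conditional is learned.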

Maybe there's no way around it.
