Thursday, August 31, 2006

How we think about treating animals

My man Daniel wrote this to me in an email, and gave me permission to blog it:

This always strikes me as a weird tactic, that a certain species has
qualities that we find admirable when we see them in human beings, and
therefore we shouldn't be cruel to them:

"If a goose's mate or chicks become sick or injured, she will often refuse
to leave their side, even if winter is approaching and the other geese in
her group are flying south."

And also that they're similar to us:

"Researchers at Middlesex University in Britain recently reported that
ducks even have regional accents, just like humans!"

In practice I'll bet that all the tender emotions I've had towards animals
have been from noticing a similarity to humans, or some interesting or
unique quality I've learned about. So it's probably an effective tactic.
But it doesn't hold up - by all accounts chimpanzees are vicious murdering
hierarchical motherfuckers. And would you have fewer misgivings about eating ducks if you learned that gang rape is a regular part of their life cycle?

Hailman, J. P., McKinney, F., Barrett, J., Derrickson, S. R., & Barash, D. P. (1978). Rape among mallards (in Reports). Science, New Series, 201(4352), 280-282.

Not to mention homosexual necrophilia (choice quote: "Another drake mallard raped the corpse almost continuously for 75 minutes.")

I don't think you can use your heart to make policy decisions in cases
like this. You have to have some kind of rational, careful work in
philosophy. It might be based on principles at bottom that are basically
intuitive (like that the more sentient an animal is the more wrong it is
to harm them) but that at least assigns animals worth independent of the
kind of tenderness that they happen to trigger in us. An animal whose lifestyle is basically evil by human standards, one that involves rape and infanticide on a regular basis, does not deserve more cruelty than a similar animal that is gentle and pairs for life (awwww).

Tuesday, August 29, 2006

Ok, I'm serious this time... NOW does anyone want to go to McDonald's with me??

A while back I mentioned an article about how a low-fat diet has not improved life expectancy.

Last night I read one of the best articles I've read in a long time: "The Soft Science of Dietary Fat" by Gary Taubes. Science, 30 March 2001, Vol. 291, No. 5513, pp. 2536-2545.

I can't emphasize enough how fascinating this article is. Read that bad boy! It goes through the scientific and political history of fat in diets, and the US dietary food recommendations (e.g. the food pyramid).

Some highlights:
* In spite of millions of dollars spent on clinical trials, researchers have failed to show that low-fat diets extend life by more than a few weeks, if that.
* Low-fat diets have, shockingly, led to weight gain, possibly because people on low-fat diets tend to eat more carbohydrates, which are, arguably, worse for you. They also eat less protein. People tend to eat a consistent number of calories, so if they cut calories in one area, they compensate in others.
* A ten-year study of over 300,000 Americans found no link between fat consumption and heart disease.

Fat increases cholesterol, cholesterol can clog arteries, which can cause heart disease, which can cause heart attacks, which can cause death. All these facts are true, but the connections between them are so subtle and complicated by so many factors that lowering your fat intake to prevent death is simply not supported by any evidence.

In my previous post on this matter a few people mentioned that the study did not differentiate what kinds of fat are being eaten. Though some fats may be more hazardous than others, it's not easy at all to know what kinds of fats you're eating. What I'm most opposed to is the simpleminded "less fat is good" attitude. To quote the article:

"To understand where this complexity can lead in a simple example, consider a steak--to be precise, a porterhouse, select cut, with a half-centimeter layer of fat, the nutritional constituents of which can be found in the Nutrient Database for Standard Reference at the USDA Web site. After broiling, this porterhouse reduces to a serving of almost equal parts fat and protein. Fifty-one percent of the fat is monounsaturated, of which virtually all (90%) is oleic acid, the same healthy fat that's in olive oil. Saturated fat constitutes 45% of the total fat, but a third of that is stearic acid, which is, at the very least, harmless. The remaining 4% of the fat is polyunsaturated, which also improves cholesterol levels. In sum, well over half--and perhaps as much as 70%--of the fat content of a porterhouse will improve cholesterol levels compared to what they would be if bread, potatoes, or pasta were consumed instead. The remaining 30% will raise LDL but will also raise HDL. All of this suggests that eating a porterhouse steak rather than carbohydrates might actually improve heart disease risk, although no nutritional authority who hasn't written a high-fat diet book will say this publicly."

I'll have the Big Mac meal with a coke.

PS: I just read that the movie "Supersize Me" actually might have caused an increase in McDonald's stock. Apparently the stock was significantly higher for the year after the film's release, while Wendy's stock was not. Personally, the effect the movie had on me was to remind me of how much I liked Big Macs. Before that I was strictly a Quarter Pounder man.

Looks like the war on cancer has "failed" as spectacularly as AI.

In a recent post I scolded Skeptic magazine for its AI-is-a-failure article, and suggested that cancer research has suffered from the same fate. Turns out this is even truer than I could have imagined. In a wonderful article, "The Thirty Years' War," Jerome Groopman describes how the "war" on cancer has done close to nothing in terms of improving cancer treatment. Most of the increased survival rates are attributable to improved early screening and other prevention. As with AI, cancer research has suffered from grandiose predictions. As with AI, it's complicated, difficult, and enormously important. How about it, Skeptic? If they want to be consistent, they should write an article bashing cancer research too.

The article is fabulous; I encourage everyone to at least give it a shot.

Saturday, August 26, 2006

How Daniel Works

I want to pitch the blog "How I work" by my good friend Daniel Saunders. It's a record of the strategies he uses as a scientist to maximize productivity. Right now there's an MSN discussion up there in which I explain to Daniel how I organize the articles I read.

See his blog at

and that particular entry at

Friday, August 25, 2006

Power Napping

Currently I'm living alone and working from home. I have no externally imposed schedule. This is probably the closest I'll ever come to a completely natural sleep cycle (though I go out dancing a lot, which messes it up). I've found that at 3:00 or 3:30pm I have a strong desire to take a nap. I sleep 20 minutes and I'm back to work.

I've always liked naps, but now the desire to sleep at 3 is so regular I schedule it into my work day. I'm also getting a daybed or couch in my office at Carleton.

I just read this article that says we should take a 20-minute nap 8 hours after we wake! It's interesting that that's exactly what my body has wanted to do.

Shame on you, Skeptic magazine!

I like Skeptic magazine, in general, but their article on how AI is a failure is very disappointing.

Read it if you like, but in essence it says that we have not created anything close to human-level intelligence in spite of a great deal of effort, and that the field has made overblown predictions.

So what is this article saying that's interesting? True, we have not created anything near human-level intelligence. This is not news to anyone. Does this mean AI is a failure? Well, sort of, but the article seems to take the position that AI progress doesn't count unless it achieves its final goal. But certainly there has been progress in AI. Just ten years ago you could not talk to computers on the phone when you called your bank or credit card company. AIs land our airplanes. There is no mention of such progress.

It's also true that AI has a history of overblown claims. Specifically, things take much longer than anyone can anticipate. But it's not that none of the goals have been met (AIs can play grandmaster-level chess, for example). I think it's unfair to judge the success of the field by the time predictions of the scientists in it. The fact is intelligence is more complicated than anyone thought-- and this keeps happening. We're still learning how hard AI is. That makes the research project a failure?

Let's look, for example, at cancer research, which has been around for longer than AI, and, I would conjecture, has had a great deal more money thrown at it. Is cancer cured? Hell no. In fact, it seems we've hardly made a dent in it at all. The (slightly) growing survival rates are mostly due to earlier detection rather than novel treatment approaches (I read this somewhere but can't remember the reference-- can someone help me out with this?) But where's the article in Skeptic magazine about how cancer research is a failure?

The problem is that cancer and AI are incredibly important issues and are worthy of continued effort in spite of their difficulty. Though the article does not come out and say it, we are left with the incorrect notion that AI has made no progress at all. From there it's a short step to cutting its funding. Bad idea. AI is the most important thing in the world.

Sunday, August 13, 2006

Hypothetical facts

If you're not into AI or cognitive science you may have a lot of trouble understanding this post. Sorry.

I'm preparing for a class I'm about to teach, and I'm teaching myself the programming language Python, because in a month I'll have to teach it to my students. I thought I'd implement a simple inference function to do modus ponens.

Refresher: Modus ponens is a simple inference rule that says if the two statements "If P then Q" and "P" are both true, then one can conclude "Q".

I've been thinking that much of memory is stored in the mind as facts: short statements, each of which is true given some context, or "microtheory." In support of this notion, the Cyc database, a long-term project dedicated to encoding, by hand, all (an estimated) 10 million facts people have for a commonsense understanding of the world, switched to facts after they found their Minskyan frame system unwieldy.

For me, facts are represented as two or more concepts connected with some relation. So, for example, if a particular dog, say "dog2", is red, it might get represented as (dog2 has-color red-color). This means there's a concept in memory for dog2, the has-color relation, and the red-color concept. The two concepts dog2 and red-color are connected with the relation. So far so good. My dissertation AI was built like this.
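As a toy Python sketch of this kind of representation (the function and variable names here are my own illustration, not from Cyc or from my actual system), facts can be stored as tuples of symbols, with modus ponens as a simple loop over the fact store:

```python
# Toy fact store: each fact is a tuple of symbols.
# A 3-tuple is (concept, relation, concept); a 1-tuple is a bare assertion.
facts = {
    ("dog2", "has-color", "red-color"),
    ("its-raining", "conditional-relation", "you-are-wet"),
    ("its-raining",),  # asserting that it is, in fact, raining
}

def modus_ponens(facts):
    """If (P conditional-relation Q) and P are both in memory, conclude Q."""
    conclusions = set()
    for f in facts:
        if len(f) == 3 and f[1] == "conditional-relation":
            p, _, q = f
            if (p,) in facts:       # is the antecedent asserted?
                conclusions.add((q,))
    return conclusions

print(modus_ponens(facts))  # {('you-are-wet',)}
```

This is the simplest possible version; it just shows the shape of the representation before the problem described below arises.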

So now we come to modus ponens-- I just ran into a problem I'm shocked I'd never encountered before. Let's say we want to store "If it's raining, you are wet." You might encode this as (its-raining conditional-relation you-are-wet). For AI people who scoff at the idea of its-raining being a single symbol, imagine it as a pointer to a complete fact, such as (state-of-world4527 has-weather rain).

But encoding it this way creates, in my system, a symbol for (its-raining). And having this fact in memory implicitly asserts that it is true. That is, if there is a fact its-raining in memory, then the agent believes it is indeed raining. Why? Because I think that facts default to true. I believe that under cognitive load, if a person hears that fact x is false, they will remember the fact as true first, and store the caveat that it's false only if attentional capacity permits.

So when I encode (its-raining conditional-relation you-are-wet) in my system, I'm also stating that its-raining is true-- and therefore our conclusion, you-are-wet! Not good.

One way out of this is to attach explicit unknown truth values to the arguments of the conditional when learning it. This seems very messy to me, however: when a new fact comes in, it gets checked, and if its relation is conditional-relation, its arguments are given unknown truth values (unless those facts already exist in memory with truth values).
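That bookkeeping could be sketched like this (again, all names here are my own hypothetical illustration, not the actual system), mapping each fact to a truth value, with None standing for "mentioned but unknown":

```python
# Memory maps facts (tuples of symbols) to truth values:
# True, False, or None for "unknown". Illustrative sketch only.
memory = {}

def add_fact(fact, truth=True):
    """Store a fact; facts default to true when asserted directly."""
    memory[fact] = truth
    if len(fact) == 3 and fact[1] == "conditional-relation":
        # The antecedent and consequent are merely mentioned, not asserted,
        # so mark them unknown -- unless memory already has a value for them.
        for arg in (fact[0], fact[2]):
            memory.setdefault((arg,), None)

add_fact(("its-raining", "conditional-relation", "you-are-wet"))
print(memory[("its-raining",)])  # None: mentioned, not yet believed
add_fact(("its-raining",))       # now it really is raining
print(memory[("its-raining",)])  # True
```

The messiness the post complains about shows up in that special-case check on the relation: every new kind of non-asserting context (negation, beliefs, quotation) would need its own carve-out.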

Maybe there's no way around it.

Wednesday, August 09, 2006

Guy pointing inspiration

I was reading Carleton Now, a newspaper at the school I work at, and there's this great, cheesy picture of a guy with his finger out.

Now back in 1993 I had this idea that I would not let any pictures get taken of me with a serious expression. My mother hates that period because every picture with me in it had this ridiculous expression, even if everyone else was looking normal. Partly for her sake, I stopped.

I'm at it again! I'm inspired by this picture to be pointing in pictures taken of me. I can't wait.

Wednesday, August 02, 2006

Why does inside knowledge make us laugh?

I was reading the same article as in the previous post, and I came across this text:

"Not all objects that have temporal extents need exhibit enough persistence to warrant a persistent identity. Consider Roger dining at a restaurant one evening."

Cognitive scientists might note that the idea of dining at a restaurant is the usual example of a "script," an idea promoted by a guy named Roger Schank. When I realized that in their example the person dining is named Roger, I laughed out loud. Why?

I have inside knowledge. I know why they picked Roger. To the layman it looks like an arbitrary name, and in the paper it acts as such. But those who know about script theory know where it came from.

Laughter is often expressed to ease tension. It's often evoked by unexpected stimuli. In some psychology experiments, showing somebody square after square, and then showing them a circle can elicit a laugh. Hilarious. Unexpected stimuli are a bit scary. Scary events are often followed with laughter. Jokes work on an unexpected ending.

But it seems there's more going on in this example. My friend Daniel criticizes some "Family Guy" humour because it's just referencing some cultural idea, and there's no cleverness. He's right, but the fact is it makes us laugh anyway (well, not Daniel, but more lowbrow audiences like me).

Frankly I'm a little baffled by this. Ideas?

Phoneme meanings

I just read the following text from an article about the Cyc knowledge base:

"The Cyc predicates relating a category to its immediate supersets and subsets, are, respectively, genls and specs." (Guha & Lenat, 1990)

I read the last three words, however, as "genus and species." Doesn't it look a little like that? At first I thought it was a neat coincidence, but it's probably not, because words are not the only basic units of meaning-- phonemes carry meaning as well.

Studies show that certain phonemes in a language tend to have specific meanings. For example, the sound "gl" at the beginning of an English word tends to mean reflected or indirect light, as in the words glimmer, glare, glow, glance, glaze, and glass. It's because of phoneme meanings that the "Jabberwocky" poem works at all.

So it's likely that "gen" and "spec" sounds have something to do with generality and specificity, both in those words as well as genus and species.

The following word origins are from
general (adj.)
c.1300 (implied in generally), from L. generalis "relating to all, of a whole class" (contrasted with specialis), from genus (gen. generis) "stock, kind" (see genus). Noun sense of "commander of an army" is 1576 shortening of captain general, from M.Fr. capitaine général. The title generalissimo (1621) is from It., superlative of generale, from a sense development similar to the Fr.

genus (pl. genera), 1551 as a term of logic (biological sense dates from 1608), from L. genus (gen. generis) "race, stock, kind," cognate with Gk. genos "race, kind," and gonos "birth, offspring, stock," from PIE base *gen-/*gon-/*gn- "produce, beget, be born" (cf. Skt. janati "begets, bears," janah "race," jatah "born;" Avestan zizanenti "they bear;" Gk. gignesthai "to become, happen;" L. gignere "to beget," gnasci "to be born," genius "procreative divinity, inborn tutelary spirit, innate quality," ingenium "inborn character," germen "shoot, bud, embryo, germ;" Lith. gentis "kinsmen;" Goth. kuni "race;" O.E. cennan "beget, create;" O.H.G. kind "child;" O.Ir. ro-genar "I was born;" Welsh geni "to be born").

Guha, R. V., & Lenat, D. (1990). Cyc: A midterm report. AI Magazine, Fall 1990, pp. 32-59.