Saturday, May 08, 2010
I'm trying to write an ambitious book, here
I'm currently trying to write an ambitious book. It's broad in scope. I'm trying to tie a lot of things together, and as a result I have to talk about a whole lot of topics in which I am not an expert. Luckily, I have a lot of friends who are scholars of various sorts, and I can ask them for advice. For example, I can ask a social psychologist and an epistemologist whether there is any evidence that people tend to believe the things they hear as a default, unless they have some reason to disbelieve. Or, I can ask an English professor if there is any evidence that the second-person point of view, in fiction, brings people into the world of the novel more than other points of view do.
Sometimes I get a straight answer with a single reference. This is ideal. I state the fact, cite the paper, and move on.
Often, the issues that I'm asking about are a bit thorny. Complicated. And the experts I'm asking know the ins and outs of the complications. The problem is that sometimes I'll ask about the truth of a fact and I will be referred to a 50-page journal article and two books. This is a little exasperating, since the fact I'm asking about is one of hundreds I need to make the points I want to make in my book. I simply don't have time, nobody does, to get into the details of the answers. I just need to know, generally speaking, whether I can treat what I want to say as true or not.
Some people, I expect, don't like this. Granted, the world is complicated, and the details are important. Some people should, indeed, be well-versed in those details. But not everyone. If books of wide scope are to be written at all, there needs to be a brief description of the state of the art for writers to use. Some things, of course, do not have straight answers. Sometimes the truth is sufficiently complicated that the kind of summary I'm asking for is senseless (e.g. the statement "Humans are inherently good."). But I think these kinds of statements are rarer than people realize. And expertise in an area often makes one so sensitive to the nuances of the arguments and evidence of the field as to render the expert unable to summarize it as an outside observer might. They know each doughnut so well they can't tell you the nature of the doughnut shop.
I firmly believe that books of the type I'm trying to write are important. Science, and scholarship in general, is fragmented, and there are often few incentives for understanding anything outside of your subfield, let alone your field. At the same time there's a recognition that this fragmentation results in duplicated effort and a narrowness in how problems are approached that can hinder progress. Books, being long, can take the time to draw things together.
When I get referred to a book, it's really not that much help. I can't take the time to read the book. If I read deeply into everything I'd never get my book done. In these situations I'm a little stuck. I can ignore the fact, and try not to state it at all; I can state the fact and not cite it (hoping for the best); or I can try to ask someone else, which is usually what I do.
I wonder if the people I'm asking think that I really want to get into the details (Do they think that my whole book is about this one topic?). Or if they think that more reading is more, rather than less, helpful. Or if giving me a short answer would require too much of their time. Would they rather have me spend days reading a book than spend a few minutes explaining it to me? I wonder.
When people ask me about something that I'm an expert in, I try to give them a simple answer and only go a level deeper if they pursue detail. Here are some examples of questions and answers I get a lot as an AI expert:
Q: Will computers ever be intelligent?
A: Some computer programs are already intelligent, by most cognitive scientists' definition of "intelligence." As to the question of whether their general intelligence will ever surpass that of human beings, there are smart people on both sides of the issue, and it remains contentious in AI and cognitive science.
Q: Why can't I just tell my computer what I want it to do?
A: Getting computers to understand language has proven to be very difficult, and the problem is not solved. Solving it will require knowledge of language that we don't yet have, as well as giving computers common-sense reasoning, which has also proven very difficult.
The moral: if you can, give simple answers until asked for more detail.