In computer science people can get attached to methodologies. If your research is methodology-based, then you are in danger of over-applying it. The evocative metaphor is "if all you have is a hammer, then you look at every problem as a nail." Let's call this the hammer rule.
Please Hammer Don't Hurt 'Em!
On the other hand, you need to have a consistent research theme for long periods of your scientific career. Your research theme should be some theory, some statement about the world that you endeavor to support through your research. In AI and cognitive science, this might involve the use of some methodology (e.g. Bayes nets, neural nets, case-based reasoning). You see the apparent conflict here.
Lately I've been interested in creating a cognitive architecture based on analogy (a cognitive architecture is a combination of a theory of how the mind works as well as a high-level programming language for cognitive modeling). In my efforts to do this, I've been going through all of the major things the mind can do and thinking of how they could be done analogically. Am I breaking the hammer rule? Yup.
The hammer rule is better suited to applied/engineering settings, and is less appropriate for science, where parsimony is a major factor in theory design. If you're a computer scientist trying to find the most efficient way to do things, then the hammer rule must be taken to heart. But in science, we want the simplest explanation that accounts for the data. It is the responsibility of scientists to push their ideas as far as they will go. If we can explain all of cognition with logic, or with case-based reasoning, then that's great.
As my man Aaron Sloman says (1984) "It is sometimes a good strategy to adopt an extreme position and explore the ramifications, for instance choosing a particular language, or method, and acting as if it is best for everything. This can have two consequences. First, by striving to use only one approach one is forced to investigate ways in which that approach can be extended and applied to new problems. Secondly if the approach does have limitations we will be in a better position to know exactly what those limitations are and why they exist."
If you're a scientist who uses a different idea for every domain you approach, you are in danger of being scattered. This is a bad career move, even though you might be doing decent science.
My AI friend Kevin Murphy likes to ask "are you working on solutions or are you working on problems?" (the correct answer, in his mind, is problems). The point of this advice is to keep you from making solutions to problems that are not real. Again, I think this is a rather applied/engineering perspective. In engineering, if there is no real problem to solve, you're doing nothing important. In science, however, if your "solution" is a theory, then you're doing it right-- apply your theory as broadly as possible, and look for new intellectual problems for it. It's good for your career. In fact, it drives me nuts when I look at faculty webpages and they say things like "My research interests are implicit memory and categorization," and say nothing about what their theory is. If you're defining your career with a problem, you're in trouble. If you're defining it as subject areas, you're even worse off.
As an exercise, you can look at some of the faculty at Harvard's psych department and see who does it right. Who makes a substantive statement about the world, and who just lists phenomena they like to look at?
And if you're unconvinced, ask yourself this-- are the best and most famous scientists in history known for their areas of interest, for the problems they tried to tackle, or for their theories?
Sloman, A. (1984). Why we need many knowledge representation formalisms. In M. Bramer (Ed.), Research and Development in Expert Systems: Proceedings of the BCS Expert Systems Conference. Cambridge University Press, 1985, pp. 163-183.