Photo credit: Lou Fasulo
Saturday, October 27, 2007
I didn't know this, but apparently the Hugo Awards have an art category (see http://www.nippon2007.us/hugo_winners.php). This year's winner is Donato Giancola. I went looking for his images with Google Image Search and found some beautiful stuff. Two of those pictures are shown here.
I think the one with the dragon is superb. It's unusual in a few ways. First, it depicts a winter scene; for some reason, most outdoor fantasy paintings take place in sunny summers. There are some exceptions, and I tend to really like them. You can see some of my favorites at:
Also, the dragon's wings are colored in an interesting pattern, much as many contemporary artists depict dinosaurs. The wings are backlit, which makes for a very beautiful image. The dragon has just snatched a horse from a snowy hill. The horse looks awkward and completely at the dragon's mercy. This is a good choice, as it emphasizes the dragon's size and power. I've always been fascinated with big animals. When I was in high school I would decorate my walls with images I liked. It was only after a wall was full that I noticed a pattern: humans in the same image with bigger things, usually animals, like whale sharks.
Also note that the dragon isn't even looking at the humans in the picture. It's just looking ahead. It's as though it's picking up the horse as we might pick up a gym bag on our way out the door.
It's probably a book cover illustration, but still-- I like that the humans depicted have a guard/prisoner relationship. It's intriguing; it makes you wonder what the story behind this is. What will the guard do now without the horse? Will the dragon be back?
I think this is the best kind of fantasy art: it brings about a sense of awe and beauty by showing us what the world could be like.
Friday, October 26, 2007
The short video embedded above (another TED talk, if I might pitch TED again) shows a computer interface prototype that is, no doubt, fun to watch. Unfortunately, the talk is only fun.
It's fashionable in the computer science community to complain about the desktop metaphor: it's too old, it's hard to find things, whatever. There are lots of valid criticisms, and watching demos like this is inspiring; it makes you think, "wow, if only I had this, things would make a lot more sense."
It takes some careful thinking to realize all the wonderful things about the interface of current computer systems. For the rest of this essay I'll assume you've watched the video above. It's short; it's worth it.
Note that he never opens any applications. If your things are tacked to walls and sitting in piles on the floor, do you have to minimize or close all of your open applications just to see enough of the floor to find the pile you want?
What about labels? If a pile has no label, you can only know what it is and what's in it by cueing your memory with its location, its size, and whatever is on top.
And what about hierarchy? I'd go crazy if I had to give up the hierarchical organization of my file system! I have some folders going five or more levels deep. Imagine all of those as piles on the floor. Unlabeled. I'd go nuts trying to find the right file.
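To make the point concrete, here's a toy sketch (mine, not from the talk; all the folder and file names are made up) of why hierarchy helps: finding a file five levels down means narrowing the search at each level, while a flat pile of unlabeled items can force you to scan everything.

```python
# Toy comparison: hierarchical lookup vs. scanning a flat "pile".
# Every name below is invented for illustration.

hierarchy = {
    "work": {"papers": {"2007": {"drafts": {"hugo.txt": "..."}}}},
    "home": {"photos": {}},
}

def find_by_path(tree, path):
    """Follow a known path one level at a time."""
    node = tree
    for part in path:
        node = node[part]  # each step narrows the search
    return node

# With hierarchy: four quick dictionary lookups.
drafts = find_by_path(hierarchy, ["work", "papers", "2007", "drafts"])
print(drafts)  # the folder containing hugo.txt

# Without hierarchy: the file is one of N unlabeled items,
# and finding it means checking them all in the worst case.
flat_pile = ["item%d" % i for i in range(10000)]
print("hugo.txt" in flat_pile)  # linear scan over the whole pile
```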
I'm not saying these problems are insurmountable, but they need to be addressed, and the talk did not address them. When you give a sexy demonstration without backing up what you're doing or addressing the concerns people will eventually think of, you gain a short-term gee-whiz reaction at the cost of the project being taken seriously in the long term.
Saturday, October 20, 2007
I try to keep this blog positive and interesting, but sometimes I just need to vent what's bothering me. To this end I have created a new blog just for that purpose:
Jim Davies: Rants
I don't advise anyone to read this only-occasionally-funny torrent of negativity!
Thursday, October 11, 2007
Back between Fall 2005 and Fall 2006 I was in Kingston, Ontario, and a member of the improvisational theatre team The Improv Show.
One of the members, Kristen Rodgerson, made a little documentary about improv and our group. I'm featured a little, mostly in the background during the credits.
Anyway, I've embedded it above, but you can also watch it here:
(or if that fails try http://www.youtube.com/watch?v=yWmAaPmuX68)
And if that doesn't interest you, you can see the world's worst cover of Europe's "The Final Countdown."
Friday, October 05, 2007
Wired magazine recently put out the Geekipedia, an encyclopedia of things geeky. The entry for Artificial Intelligence is terrible. I won't even link to it because it doesn't deserve a high ranking on Google. If you want to find it, look here and search.
Anyway, I wrote a letter to the editors of Wired regarding this (thanks to Alison Way for editing). We'll see if they print it...
Reports of A.I.'s brain death have been greatly exaggerated
Wired's Geekipedia entry on Artificial Intelligence is disturbingly naive for an otherwise technologically literate magazine, repeating claims that journalists have made over the years and for which I've never been able to find any empirical support.
The great progress made in vision and walking is dismissed as not being A.I.-- a classic case of the "A.I. Effect," which leads people to dismiss anything that actually works as not being A.I., thus robbing the field of its successes. And the author neglects to acknowledge, much less dismiss, speech recognition.
Further, the 'pedia refers to A.I.'s practical returns as "meagre," ignoring that A.I.s answer phone calls, schedule gates at airports the world over, land the airplanes that fly us to and from those very gates, facilitate call centers, vacuum our floors, detect credit card fraud, destroy land mines, devise and run biology experiments, and have generated millions upon millions of dollars in savings. A single military scheduling application paid back DARPA's entire 30-year investment in A.I. research! See the chapter on A.I. successes in Kurzweil's "The Singularity is Near" for a summary. Without the A.I. technology deployed so far, it is not overreaching to suggest that our world as we know it would grind to a halt.
The most absurd claim is the only new one: that A.I.'s supposed failure is due to a lack of consideration of philosophy. This claim is wrong on two counts. First, the A.I. community has always paid attention to philosophy and taken its wisdom where it was useful. Second, to claim that "grappling with philosophy" will solve the grand problems of A.I. underestimates the technical complexity of commonsense thinking.
It's as crazy as saying that we will never reconcile quantum mechanics and relativity until we "grapple" with metaphysics.
Jim Davies, Assistant Professor
Institute of Cognitive Science
Carleton University, Ottawa