"Filter Bubbles" are So Much Better Than What We Had Before
Several people I know have told me about a TED talk on content filtering on the web. The thesis of the talk is that if the content delivered to you is personalized (based on what you like, and so on), you'll miss out on important information.
You can see the talk here: http://youtu.be/B8ofWFx525s
He complains that algorithmic editors do not have the "embedded ethics" (time 6:30) that human gatekeepers of information do.
How does he say that with a straight face?
Traditional journalism editors have their eyes on the bottom line. They might mean well, but news, in any case, is biased toward things that are timely, changing, and negative. Thus, if the war in Chad is still going on with no change, it won't make the front page, even if it is the most important problem facing the world day after day.
The traditional media make us hear about new things, sensational things. And since we have a bias to think that what we see often is important (the availability heuristic), we get a skewed notion of what matters.
At least with algorithmic customization, you have some influence on what you see, and you can often "dislike" something, either with an explicit command or simply by not clicking on it. Some of us will still drift toward the sensational and fluffy, sure, but others won't.
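To make that concrete, here is a minimal Python sketch of the kind of feedback loop I mean. Everything in it (the class, the signals, the multipliers) is invented for illustration; it's not how any particular site actually works.

```python
from collections import defaultdict

class PersonalizedFeed:
    """Toy model of a personalized feed: each topic gets a weight that
    moves up or down as the reader gives explicit or implicit feedback."""

    def __init__(self):
        self.topic_weight = defaultdict(lambda: 1.0)  # every topic starts neutral

    def record_feedback(self, topic, liked=None, clicked=None):
        # Explicit signals move the weight a lot...
        if liked is True:
            self.topic_weight[topic] *= 1.2
        elif liked is False:
            self.topic_weight[topic] *= 0.5
        # ...implicit signals (clicking or ignoring) move it a little.
        if clicked is True:
            self.topic_weight[topic] *= 1.05
        elif clicked is False:
            self.topic_weight[topic] *= 0.9

    def rank(self, items):
        """Order candidate (topic, headline) pairs by the learned weights."""
        return sorted(items, key=lambda item: self.topic_weight[item[0]], reverse=True)

feed = PersonalizedFeed()
feed.record_feedback("celebrity gossip", liked=False)   # explicit dislike
feed.record_feedback("war in Chad", clicked=True)       # quiet interest
print(feed.rank([("celebrity gossip", "Star spotted at cafe"),
                 ("war in Chad", "Conflict drags on")]))
```

The point is just that the reader's own signals, not an editor's guesses, decide what floats to the top.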
The other problem with human gatekeepers of information is that they decide what masses of people will see. We all end up with the same information. This is much worse than everyone getting different things, from a societal point of view. I conjecture that the situation in which everyone knows the same information, even if it's broad, is worse than if everyone knows different information, even if it's narrow. If we all know the same stuff, we'll all think in more similar ways.
He advocates imbuing the algorithms with a sense of importance and of challenging the content consumers. I think the importance idea is good, and possibly practical. I sure hope that if it happens it uses better guidelines than what traditional editors have (someday I'd like to write a book defending my loathing for all traditional news sources).
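If you wanted to act on his proposal, the simplest version I can imagine is a blended ranking: score each story by personal relevance and by an independent importance estimate, and mix the two. This is my own hypothetical reading of the idea, not something from the talk; the names, the 0-to-1 scale, and the 0.6 weight are all made up.

```python
def blended_score(relevance, importance, importance_weight=0.6):
    """Weighted blend of personal relevance and editorial-style importance,
    both assumed to be on a 0-to-1 scale (an invented convention)."""
    return importance_weight * importance + (1 - importance_weight) * relevance

stories = [
    {"title": "Cute cat video",        "relevance": 0.9, "importance": 0.10},
    {"title": "War in Chad continues", "relevance": 0.2, "importance": 0.95},
]
stories.sort(key=lambda s: blended_score(s["relevance"], s["importance"]), reverse=True)
print([s["title"] for s in stories])  # the important story outranks the merely likable one
```

The hard part, of course, is where the importance number comes from, which is exactly the question of whose guidelines we trust.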
However, I fear that people, in general, do not want to be challenged, and content gatekeepers that force challenging views on us will lose eyeballs to those that do not.
That's a fancy way of saying they'll go out of business.
So I partially agree with him, but I guess I'm not emotionally with him. I find traditional media so infuriating, and content filtering so wonderful, that I'm still riding the initial wave of bliss.
The next time I see an article about a newspaper going bankrupt, I'm going to make sure I click "like."