Saturday, October 31, 2009

Accidental Peer-to-peer Leak?

In an unintentional demonstration of how fluidly information moves through social media, The Washington Post learned which members of Congress are being investigated by a congressional committee, because a staffer saved a list to a part of her computer that was visible to a peer-to-peer network, and someone saw it.

From the Washington Post:
In the breach, the report was disclosed inadvertently by a junior committee staff member, who had apparently stored the file on a home computer with "peer-to-peer" software, congressional sources said. The popular software allows computer users to share music or other files and is easily available online. But it also allows anyone with the software on a computer to access documents of another user without permission, as long as the users are on a file-sharing network at the same time.
Here's a slightly older version from Yahoo News, for those who don't want to sign up with the Washington Post.

I think it would be odd if the main thing to come of this was a crackdown by spooked lawmakers on peer-to-peer networks.  From Yahoo:
The Recording Industry Association of America said the disclosure was evidence of a need for controls on peer-to-peer software to block the improper or illegal exchange of music. Some lawmakers have tried for years to bring this about.  Mitch Bainwol, the group's chairman and chief executive officer, said, "It's now happening (in) Congress' backyard, and that should be a powerful catalyst to enact real reforms to protect consumers."

Saturday, October 24, 2009

Wikipedia Article Process


The process of creating my Wikipedia article brought me back to basic research, albeit done digitally.

There was very little material available on the internet on my subject, 1930's food media personality George Rector.

I ended up resorting to newspaper archives, primarily the New York Times, since Mr. Rector achieved his greatest fame in that city.

Google was only useful once, when I checked Google Books for Rector books, and found a contemporary review from Kirkus.  This was helpful because the Kirkus website does not include the full texts of old reviews.

I found this of note while reading up on my first attempt at posting an article:  "In general, sources with NO editorial control are not reliable."

Does Wikipedia have editorial control?  I suppose a Wikipedian would argue that it has after-the-fact editorial control: if something wrong or biased is posted, someone else will eventually see it and fix it.  Wikipedia considers old-fashioned, top-down edited sources of information more reliable than sources without editorial control, but does Wikipedia consider itself a reliable source?  I assume so, because it tells people making posts to include sources and references.  But this seems somewhat circular.

Wikipedia seems to make it difficult to find the link to post new articles.  Then it gives you lots of warnings before allowing you to post one.  I think I have gotten fewer warnings when erasing the hard drive on my computer.

I suspect Wikipedia rightly assumes that there is more need for peer editing than for new articles, so it may bias its user interface in favor of editing instead of editorial expansion.

Wikipedia Waiting Period

Hmmm.  So I spent about 6 hours today researching and writing a new article for Wikipedia.

I'm guessing this is about 6 hours more than the average poster.

Anyway, I ran into a problem.  I posted it in the work in progress page, and apparently because I have not been a registered user long enough/have not edited enough articles, I do not yet have access to the "move tab," so I can't post it.

So the world will have to wait to read my article as a full-fledged Wikipedia entry.  Sigh.

At least you can read the work in progress.

It is about the author of a cookbook my grandfather gave to my grandmother as a present, and which my mother still has (and still uses).  George Rector was a forerunner to Julia Child.  He had a cooking show on network radio in the 1930's, had a Broadway musical written involving the trendy New York City restaurant he ran with his father before prohibition, and he wrote food columns for the Saturday Evening Post.  He also appeared in a Mae West movie...  As himself!

Sunday, October 18, 2009

Don't Be Evil

Google's official Don't Be Evil policy seems similar to other corporate behavior policies, except for the part about allowing dogs.  The main difference seems to be that it is written in a more casual manner.


The long version of this policy may be a significant change from the early days of the company, when "Don't Be Evil" was described in a 2001 Wired magazine article as "What Sergey says is evil." 

At the time, the article went on to say:  "Most major companies refer to a detailed code of corporate conduct when considering such policy decisions. General Electric devotes 15 pages on its Web site to an integrity policy. Nortel's site has 34 pages of guidelines. Google's code of conduct can be boiled down to a mere three words: Don't be evil."

The current Google Code of Conduct is more than 6,000 words, and about 11 pages long, depending on formatting.  As was already starting to happen when the 2001 Wired article was written, things have gotten significantly more complicated for Google as it has grown.


The phrase may have been coined by an engineer at the company several years earlier, but it gained official status in the eyes of the business world during Google's IPO.


The motto is a positive one, although it is somewhat limited.  Don't Be Evil is not the same as Be Good.


This Management Today article from March 1, 2007 takes the argument a step further:


Slogan Doctor: Google - Don't Be Evil.
Look at Google's investor relations pages, under 'Google code of conduct', and you find a reference to its famous 'informal corporate motto'. The code itself takes 5,500 words, the slogan just three. At a meeting in July 2001, a dozen or so of the search engine giant's first employees were thinking about 'core values'. An engineer called Paul Buchheit announced that everything they had been talking about could be summed up in one phrase: 'Don't be evil'. The slogan stuck. Evil is one of the most resonant words in our language, making this phrase stand out from the usual corporate platitudes, especially in the US, land of witch-burning, evangelism and Star Wars. In the early days, not being evil seemed simple. But now that Google has to deal every day with tyrants, pornographers, neo-Nazis and sellers of comedy ringtones, the slogan is almost a liability. Almost, but not quite: note that it prohibits being evil but not doing evil. Which suggests that Google, while convinced of its own righteousness, will do what's good for Google.

How Does Google Search Work

Here is the obvious part of what happens when I type in a search on Google:

My query goes to one of Google's servers, probably one relatively close to me geographically.  The server passes the query on to a database, which uses the PageRank formula to produce a list of results based on "more than a hundred" (or by some estimates, about 200) factors.  The details of some of these factors are kept secret by Google, so the results cannot be compromised and the competition cannot copy them.  They include how popular a site is, how many links point to a site, and, based on my observation, the previous searches conducted at a particular computer.  The server then sends the results back.
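As best I can tell, the ranking step amounts to scoring each candidate page on many factors and sorting by the combined score.  Here is a toy Python sketch of that idea; the factor names, weights, and pages are all invented, since Google keeps the real ones secret:

```python
# A toy illustration of combining ranking "factors" into one score and
# sorting results by it.  The factor names and weights are invented;
# Google's real formula and its roughly 200 signals are secret.

def score(page):
    weights = {"popularity": 0.5, "inbound_links": 0.3, "query_match": 0.2}
    return sum(weights[k] * page[k] for k in weights)

pages = [
    {"url": "a.example", "popularity": 0.9, "inbound_links": 0.2, "query_match": 0.7},
    {"url": "b.example", "popularity": 0.5, "inbound_links": 0.9, "query_match": 0.9},
]

# The highest combined score comes back first, like a results page.
results = sorted(pages, key=score, reverse=True)
print([p["url"] for p in results])  # ['b.example', 'a.example']
```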

At the same time, under the hood, Google places small pieces of stored text called cookies on my computer, if they are not already there.  They store preferences about the site, like how many results I want displayed per page, what language I want to use, and whether "Safe Search" (no dirty pictures) is on or off.  These also allow Google to track what web pages I visit, and to send me advertisements that match the information Google gleans from my searches and web surfing history.  Some of the advertising cookies are from doubleclick.net.  Cookies are often set to expire; some of Google's cookies last for decades, although a search of my home computer found cookies from several other firms also set to expire decades in the future.  Google recognizes your computer and remembers it for at least a year and a half.  If you click on an ad in Google Search, Google may place a short-term cookie in your browser that tracks whether or not you bought anything at the site the ad sent you to.
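A cookie is really just a name=value pair plus attributes like an expiration date.  Here is a sketch of parsing a made-up long-lived cookie header with Python's standard library; the cookie name, value, and domain are invented, not an actual Google cookie:

```python
# Parsing a (made-up) long-lived cookie header.  The name, value, and
# domain here are invented for illustration.
from http.cookies import SimpleCookie

header = 'PREF=abc123; expires=Sun, 17-Jan-2038 19:14:07 GMT; path=/; domain=.example.com'

jar = SimpleCookie()
jar.load(header)

morsel = jar["PREF"]
print(morsel.value)        # abc123 -- the stored preference data
print(morsel["expires"])   # decades in the future
print(morsel["domain"])    # .example.com
```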

I asked two people what they think Google does when they perform a search.

Person 1:

What happens when you make a search?
The site cross references the words you enter with previous searches, and it lists results by comparing the sites that have been accessed in response to previous searches.  That also influences the sites that come up and the order they come up.  [The ranking] is also influenced by the advertisers, what order they come up in the search.

It is one of those things you take for granted.  Explaining it is like explaining how to play a game of Monopoly.  The act of explaining Monopoly is more complicated than playing the game itself.  Explaining how a search works is more complicated than performing a search and letting the technology work its magic.


What info do you think Google collects about you?
Far more than I like to acknowledge or think about.  I'm not sure.  I have a Gmail account, often I access that site while performing searches.  I'm not sure how closely it is able to customize my search results based on previous searches.  Most of my searches have a common thread, related to my [schoolwork].
If I'm not logged into my Gmail account, I'm not sure they can identify me, if I am using a common computer.  As we are talking I am realizing how ignorant I am of the process.  I don't know if they can track who I am based on my individual computer. I don't know if they can identify my individual laptop.

Metacrawler results tended to be more accurate, but I defaulted back to Google because it is everywhere.  I rarely use the general Google search, I usually use Google Scholar, or Google Government, Google News.

Person 2:

What happens when you use Google?
I type the search in and somehow the system scans thousands of databases.  It brings up the best result, the closest fit to what I typed in.  Wikipedia usually comes up second, first is usually an encyclopedia entry.

Do you know what information Google keeps about you?
I never thought about that.

Google Advertising Cookies Part 2

As mentioned in the previous post, here is the page where you can opt out of Google's advertising cookies.  Here is a link to the web browser plug-in that opts you out more permanently.  It also appears that on at least some browsers (like Safari), you can stop most cookies like this by blocking third-party cookies in the browser's preferences.

An advertising privacy policy includes Google's explanation of what it uses cookies for, under the heading "How does Google use cookies to serve ads?"

Google Advertising Cookies Part 1

This is an interesting article from IDG News Service (a tech publisher) explaining Google's advertising cookies.  Essentially, Google is now planning to put a cookie on a user's computer which serves them ads based not just on what they type into the Google search engine, but on what they view as they surf the web.  The cookie will apparently track all of a user's surfing, although a Google lawyer says the company will not target advertisements to people it believes are children, or based on medical conditions.  I had not thought about this, but it makes sense: if you go online and look at a web site about diabetes, it may mean you have diabetes.  I suspect the average user would not want what could be information about their medical condition to be commodified this way.  I wonder what other categories Google will track and advertise to?  How about sexual orientation?  Some people would have no problem with this information being traded online, but others would want it kept private.

This also struck my interest:

Ironically enough, the way that Google suggests people opt out of its cookie-based interest-tracking system is by allowing it to set a special cookie on their computers. However, the people that opt out of cookie-based tracking systems also tend to clear the cookies from their computer from time to time, which would result in Google once again tracking their interests via cookies.  To resolve this problem, Google also offers a plugin for Firefox and Internet Explorer which will maintain the opt-out cookie even if other cookies are cleared from the browser.

So in order to opt out of the Google advertising cookie, you have to install a piece of Google software on your computer.  Or stop visiting Google.
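The paradox is easier to see in a toy simulation.  The cookie names and tracking logic below are invented for illustration; this is not Google's actual mechanism:

```python
# A toy simulation of the opt-out-cookie paradox described above.
# The cookie names and the tracking logic are invented.

browser_cookies = {"id": "user-12345", "OPT_OUT": "yes"}

def is_tracked(cookies):
    # The ad server tracks you unless it sees the opt-out cookie.
    return cookies.get("OPT_OUT") != "yes"

print(is_tracked(browser_cookies))   # False: opted out

browser_cookies.clear()              # privacy-conscious user clears cookies...
print(is_tracked(browser_cookies))   # True: ...and tracking quietly resumes

def optout_plugin(cookies):
    # The browser plug-in's job: re-create the opt-out cookie after every wipe.
    cookies.setdefault("OPT_OUT", "yes")

optout_plugin(browser_cookies)
print(is_tracked(browser_cookies))   # False again
```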


I'm not wooed by Google's argument on its official blog that this will make ads more interesting.  However, I think there is some truth in this statement, from that blog post:
Advertising is the lifeblood of the digital economy: it helps support the content and services we all enjoy for free online today, including much of our news, search, email, video and social networks.
There is also an interesting section near the end of the IDG News Service article:

While Google will determine surfers’ interests based on the sites of its AdSense partners that they visit, other companies have more ambitious plans for tracking surfers’ online habits in order to sell targeted advertising.  In the U.K., a number of Internet service providers are considering adopting the Webwise service sold by a company called Phorm, allowing them to track all the sites that surfers visit. BT Group has said it will have the system in operation by the end of this year. However, the system has raised privacy concerns, and the European Commission has written to the U.K. government on three occasions asking it to ensure that the system complies with Europe’s laws on personal data protection.

It seems like there could be a lot of things you would have to opt out of, if you want to avoid being tracked this way.  Perhaps too many for most people to keep track of.

Google Books Legal Issues

Perhaps a model for a Google Books licensing deal could be provided by the music industry?

Radio or television stations that play music obtain the rights to do so by paying groups representing the composers (although many musicians have thought they should have a cut of the royalties, too).  This arrangement has been imperfect, but it has lasted for a long time.  The artists benefit because at least some of them get royalties, and because their work gets more recognition and consumption when it is broadcast, so more people will buy their music or go to their shows.  Authors or their heirs could also benefit from a similar arrangement for books.  The nature of digital record keeping allows for more exact royalty payments than were possible in the days when radio stations played music off records or CD's and kept track of airplay on paper during audit periods.
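To make the record-keeping point concrete: with exact digital play counts, the payout becomes a one-line calculation.  A toy Python sketch, with an invented rate and invented play counts:

```python
# A toy royalty calculation from exact digital play counts.  The rate and
# the play counts are invented; integer cents keep the arithmetic exact.

def royalty_cents(play_count, cents_per_100_plays=5):
    return play_count * cents_per_100_plays // 100

plays = {"Author A": 120000, "Author B": 45000}
royalties = {name: royalty_cents(n) for name, n in plays.items()}
print(royalties)  # {'Author A': 6000, 'Author B': 2250}  (i.e. $60.00 and $22.50)
```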

In extremely broad strokes, I think there needs to be a deal allowing Google Books, and any other company, to pay a standardized fee to authors/publishers/rights holders, for permission to put digital books online.  Much of the disagreement seems to be a matter of money, not philosophy:  The libraries are afraid Google will gouge them.

Digital represents a faster, more easily searchable way of publishing.  Many of the differences between printed reading and digital reading are the result of technology that is still in flux.  Reading online can be scattershot because of hypertext.  Reading on a screen can be hard on the eyes because the resolution is low, the screen is backlit, and the design is often poor.  The differences we have discussed in class between digital reading and book reading seem relatively minor: discovering a book on a shelf, as opposed to finding it in an online search, involves only a small part of the experience.  Searching a book with a text search box does not seem much different from using the index, since both direct your attention to specific parts of the book.  Kindle-type devices might eventually provide the same resolution and readability as the printed page, hopefully without the multimedia and hypertext distractions.  This could virtually erase the difference between print and electronic reading.

Google Co-founder Writes Article in NY Times on Google Books



It was very considerate of Google co-founder Sergey Brin to write his New York Times Op-Ed piece just in time for our class on Google!  (If the NY Times blocks access, class members can look it up here or here.)


His description of the modern day preservation of printed material reminds me of a scene from medieval Europe, in which a handful of monks copy books by hand in an abbey, while barbarian hordes sack and pillage the countryside, using folios for kindling and toilet paper...
Books written after 1923 quickly disappear into a literary black hole. With rare exceptions, one can buy them only for the small number of years they are in print. After that, they are found only in a vanishing number of libraries and used book stores. As the years pass, contracts get lost and forgotten, authors and publishers disappear, the rights holders become impossible to track down. Inevitably, the few remaining copies of the books are left to deteriorate slowly or are lost to fires, floods and other disasters.
His point is that Google's effort to digitize books will preserve them, and make them more easily available.   Digitization could certainly make book content more easily searchable, and instantly available. But he also seems to think that printed books are pretty much useless.  "...Even if our cultural heritage stays intact in the world’s foremost libraries, it is effectively lost if no one can access it easily."   While it is true that digitizing text makes it more efficient to access, I think it is an exaggeration to imply that civilization would be lost, without Google Books.    "Because books are such an important part of the world’s collective knowledge and cultural heritage, Larry Page, the co-founder of Google, first proposed that we digitize all books a decade ago," Brin writes.  


Brin's take on this is overblown.  For centuries, people have read books, even though they were not instantly available from anywhere on earth, and even though instant Boolean text searches were impossible.  Digitizing books should make them easier to access, but Brin is exaggerating the danger to world culture of not digitizing them.  The full version of a New Yorker article quotes Barry Diller as saying, after an interview with Brin and Page, "I left thinking that more than most people they were wildly self-possessed."  Brin's Op-Ed piece seems to support this.


Brin also mentions a book which he says is no longer available, The Stanford-Lockheed Meyer Library Flood Report.  But technically, it is available.  After a search of less than a minute at the state library system web site, I found four copies of it, the nearest at the New York Botanical Garden.  



Author: Buchanan, Sally.
Title: The Stanford-Lockheed Meyer Library flood report / Sally Buchanan, Phillip Leighton, Leon Davies.
Imprint: [Stanford, Ca.]: Stanford University Libraries; 1980.
Location: Bindery Library
Call #: Z701.3.F55 B83 1980
Status: Check shelf
Phys Descr: 56 p.: ill.; 28 cm.
Note: Cover title.
Bibliog.: Includes bibliographical references (p. 28) and index.
Subjects: Books -- Conservation and restoration; Restorative drying; Flood damage prevention; Flood damage.
Other Authors: Leighton, Phillip; Davies, Leon A.; Stanford University. Libraries.
Other Title: Meyer Library flood report




However, here is where Brin does have a good point: driving to the Bronx to check out the book would take much of the day (and the Botanical Garden's library is closed Sunday and Monday), reading it there would take all day, and interlibrary loan (if it is available) could take days if not weeks to get the book to my town.  Downloading the book online could take a matter of seconds.  Whether Google does it or somebody else does, I suspect the future holds a greater number of digital books and a lesser number of printed books.

Saturday, October 17, 2009

Google is Inside My Head



In case there is anyone else in the class who had not found time to track down the New Yorker article Colin assigned (here is the abstract), I tried e-mailing it as a pdf to the class through Blackboard.  


If that did not work, you can find the full article by going to the Trinity College library, searching for The New Yorker under "Find a Journal or Newspaper by Name," then searching for the author or title.  I had to install VPN software to get access to Lexis-Nexis from my home computer.  The software is available free from Trinity.


* * * * *


Going through the process of tracking down the article made me realize how pervasive Google is inside my own head.  Part of its philosophy is also present in my actions during the search.  After I looked at the link to the abstract that Colin provided, I used The New Yorker's own site search to see if I could find another way to get the full article without paying for it.  That turned up empty, and a look at the nifty magazine viewer ran into a request for four dollars, which I refused.  I seem to share Google's ethos that information should be free.  As the Auletta article said, "Google has reinforced the notion that traditional media now want to combat: that digital information and content should be free and that advertising alone should subsidize it."  Or maybe I am just cheap and reluctant to give my credit card info to a company I will probably have no future contact with.


My next step in the search for the article, was of course to go to Google.com.  I used the company's search engine to  find the author's web site.  That site now includes a link to the New Yorker abstract, but not the full article.


Out of habit and experience, I then turned back to Google's search engine, and performed an advanced search of the New Yorker web site.  Then I searched the whole web to see if a reliable source had reposted the article.  After a couple of minutes of looking, both approaches came up empty.


Instead, I went back to an older search tool available to me as a Trinity student:  LexisNexis.  This brought back memories of sitting in the pre-renovation Babbidge Library at UConn as an undergrad, in 1995 or so, amazed that I could search through articles from newspapers across the country.  Ah, the good old days.  Sigh.


LexisNexis is based on an older economic and media model:  A user or institution (college, law firm, etc.) pays for access to the company's controlled databases, and usually gets fairly reliable information from a professional source, like a newspaper, magazine, or government record.  The breadth of content is limited by what LexisNexis' staff creates or licenses.  The Google model is more recent:  Users typically pay nothing for information from the web that someone has decided to release for free, and they must decide for themselves whether a given blogger or web site can be believed.  The money comes from advertising and some data collection about the users.  The breadth of information users can access is wider, but they must watch more carefully for incorrect information.  Social networking sites like Facebook might create a slightly different model:  The information you search for comes from "friends" with whom you have some previous contact.  The user again pays nothing, and the money comes from advertising and the large amounts of personal information collected about users.  Much of the information users search for seems to come from the broad information source of the web, but filtered through the user's "friends."  Because of that filtering, social networks offer the narrowest source of information of the three.


Google is my first choice search engine, because a number of years ago, I tried it and it reliably offered better searches than the other engines of the day.  In recent years I have not even bothered to try other search engines.  Google still offers a useful way to access the broad information source of the world wide web.

Friday, October 9, 2009

Odd "Video Game"

This is strange.  It is a video game played on top of some of the internet's most popular or influential web sites, some of which we will be talking about in this class.

Maybe it is a commentary on the way Google inserts advertising into everything on the web.  Perhaps it is a statement that the web has become nothing but an advertising vehicle.  It could be a commentary on the drive to "monetize" the internet.

Or maybe it's just a bunch of wavy lines and bitmaps.

Monday, October 5, 2009

Garlic September 26



After buying some garlic at the supermarket, I noticed some of the cloves were sprouting.  I planted one of them September 6, 2009.  It currently lives next to my window.

Sunday, October 4, 2009

Facebook as the Mall

As a public space, I think Facebook is more like the mall than the town green.

The purpose of the town green is simply as a common space.  If a group with a controversial social message wants to gather there, they can, regardless of what their message may be.

The purpose of the mall, and of Facebook, is to sell you things.  Social functions may take place in either location, but when push comes to shove, the reason the mall exists is to get you to spend money at The Gap.  The reason Facebook exists (so far) is to get you to look at advertisements.  To be fair, this is also the reason most newspapers, TV stations, and radio stations exist: to sell ads and make money.  Both Facebook and the mall are for-profit companies.

The issue of "monetizing" the web has come up in class and in the readings several times.  A look at Facebook's site for advertisers shows one way that the company has broken even, and hopes to make a profit: By using the troves of data people enter about themselves to allow advertisers to target their media buys.

If you enter your birthday, advertising can be age-specific, even to the point of appearing ON A USER'S BIRTHDAY.  Advertisers can also target by Location, Age, Sex, Keywords, Education, Workplace, Relationship Status, Relationship Interests, and Languages.

So an endorsement by Bob Dylan can appear to users aged 51 to 59, and an endorsement by Kristen Stewart can appear to women 18 to 22. Targeting based on Keywords would allow the advertiser to put these endorsements next to any web site that mentions "Blood on the Tracks", or "Twilight."

People can be targeted by college, or even major, which tells advertisers a lot about a user's likely income, and allows advertisers to know who is a doctor, or a lawyer, for instance.

Because people enter so much information about themselves on social media sites, advertising can be very targeted, and presumably more successful.
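Conceptually, this kind of targeting is just filtering user profiles against an advertiser's criteria.  A toy Python sketch, with invented users and made-up field names:

```python
# A toy sketch of demographic ad targeting.  The users, field names, and
# matching rules are all invented for illustration.

users = [
    {"name": "A", "age": 55, "sex": "M", "interests": ["Blood on the Tracks"]},
    {"name": "B", "age": 20, "sex": "F", "interests": ["Twilight"]},
    {"name": "C", "age": 20, "sex": "F", "interests": ["Blood on the Tracks"]},
]

def audience(users, min_age, max_age, sex=None, keyword=None):
    """Return the names of users matching an advertiser's criteria."""
    matches = []
    for u in users:
        if not min_age <= u["age"] <= max_age:
            continue
        if sex is not None and u["sex"] != sex:
            continue
        if keyword is not None and keyword not in u["interests"]:
            continue
        matches.append(u["name"])
    return matches

# The Bob Dylan endorsement goes to ages 51-59; Kristen Stewart to women 18-22.
print(audience(users, 51, 59))                               # ['A']
print(audience(users, 18, 22, sex="F", keyword="Twilight"))  # ['B']
```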

* * * * *

Some advertisers are taking another approach to social media like Facebook, hoping to make the users into the advertising vehicle, according to the Advertising Age article.  I don't think I would do this for a for-profit company like Red Robin.  If I am an advertising medium, I should get paid more than a "possible cash prize."  And as an individual I would be very reluctant to rent out my reputation.  But I might participate in something like this for a non-profit, or for a business that had somehow done something really spectacular to help me (example: if I left my iPod Nano in a restaurant, it got swept into the trash, and they went into the dumpster and pulled it out for me.  But I don't own an iPod Nano, so I guess I will not be endorsing any businesses any time soon).

I think an Advertising Age commenter on the bottom of the article makes a valid point:
"By ann | Darnestown, MD September 30, 2009 11:50:20 am:  The devil here is in the details of the survey practice. IMHO, rewarding someone for taking a survey with "possible cash prizes" then asking them to fill out a recommendation isn't any different from awarding "possible cash prizes" for recommendations. It's a pretty transparent bribery scheme. I agree recommendations work better than fans but when a company pays for those recommendations how can a consumer trust them???"

Journalism Training for Bloggers?

The article What Is Journalism’s Place in Social Media? shows a need for training not just for journalists in the age of social media, but also for hobbyist bloggers/social media content providers, and at least a little training for information consumers.

We have expressed concern in class about the declining quality of traditional journalism as budgets and staffs get squeezed.  Perhaps part of the answer is to train people who blog as a pastime to be better, and to be aware of some of what good journalism really is, and why it is more fun than being a shill for a particular side of the political debate.  Not a full-blown college class, but something less time consuming and less expensive, perhaps more on the order of a continuing education class, or online seminars, or simply guidelines for bloggers who want to be taken seriously as journalists.  A quick check online did not turn up any classes like this from any sources I was familiar with, although there must be something available.  There are already a number of journalism training groups, to say nothing of J-schools at colleges and universities.  

In the past, news consumers have usually been able to evaluate a source's veracity fairly quickly.  If the information in question came from a daily newspaper or a national news magazine, it often could be considered fairly reliable.  If the source was a friend gossiping about what they heard from someone else, the information might be less reliable.

If newspapers and magazines are being replaced by bloggers online, it becomes more difficult to tell which blogger you can believe.  Perhaps something like this assistant professor's guidelines, or this, would help.

Garlic September 25




After buying some garlic at the supermarket, I noticed some of the cloves were sprouting.  I planted one of them September 6, 2009.  It currently lives next to my window.

Saturday, October 3, 2009

Wildly Unscientific Facebook Survey


It's not just unscientific, it might also be invalid!

In response to ongoing privacy concerns about Facebook, I posted this message on my profile on the site:  "Quick unscientific poll for my media class: Who knew/did not know the applications you use in Facebook (Farmville, Mafia Wars, Bejeweled Blitz etc.) can collect information when you give them access to your profile?"



I also included a link to a Washington Post story describing some of what I was asking about. I got four responses from my Facebook Friends:


  • Knew that.
  • I knew that. But I made the assumption that it was only info that you made available on your profile. Not anything that you may have input when you registered but choose not to have on your profile. Not sure if that is the right assumption or not though....
  • knew that, but not exactly sure which info is not accessible
  • Why are you trying to make me hate my farm? Hmmmmm???


All four responses were from women in their 20's or 30's (I don't know if that says something about me or about Facebook), each of whom I would consider fairly tech-savvy.

I would have theorized that most people are not really aware of the information they are giving away to those applications, but most of the people who answered said they knew about this. Perhaps the people who did not know, or don't think about such things would be less likely to admit it in a public forum like my wall?

The answers do still include some uncertainty. Perhaps Facebook should try to make its policies more noticeable? Right now, they are down at the bottom of the homepage, which is a fairly common place for a policy like this to be on other sites.


First Sighting of Proto-Facebook...?

Mark Zuckerberg is the guy who later designed Facebook.  (Although some other students later claimed he took some of their ideas, and sued.)

Although The Facebook had not been created yet, this 2003 article in the Harvard Crimson notes some of the same issues we are talking about now, like privacy, hurt feelings, and something a young person posted online (ostensibly) going beyond the audience he originally intended for it.

Facebook Journalism: Helpful but Overblown?

This is a response to something that was not really the main part of Colin's post:  "I wanted to transition over the Facebook this week partly because it has gigantic implications for the legacy journalists"


I don't think Facebook itself has gigantic implications for old-school journalists.  Maybe it could in the future, with some significant changes, but for now, I think it has some newsgathering advantages, but is far short of a game changer.


Facebook (and other social networking sites) can be useful for newsgathering: They provide a new way to try to find a specific person, beyond old (early 2000s) methods of looking someone up in an online phonebook, Googling him or her with a home town as one of the search terms, or searching the web sites of organizations he or she is associated with.  Social networking sites frequently provide local television news, and other media, with a way to get photos of people they are doing stories on.  Watch this video from WFSB for an example.


But Facebook has a number of limitations as a newsgathering and dissemination tool.


First, for news distribution, there seems to be no economic model to sustain it.  Facebook seems to be a private version of the entirely public space of the internet.  The Internet is the town green, Facebook is the mall.  Because Facebook is a private space, it controls the ads, and keeps the revenue.  The mall and Facebook exist to make money, not to provide a forum for public debate.  The public forum is the town green, or the internet.   At least some of Facebook's guts are open source.  Perhaps a non-profit version could solve this, and reduce the site's propensity for gathering personal information, presumably as a way of making money?


Media organizations that have presences on Facebook (CNN, the Los Angeles Times) often simply post a collection of links to stories back on their own web sites, in hopes of driving traffic to those sites, where CNN and the LA Times sell the advertising.  Although Facebook's design allows for efficient transmission of news through feeds, the money is not there to support original, Facebook-only news operations, unless somebody comes up with an as-yet-unforeseen way to pay for it.


On the newsgathering side, the improvement in contacting sources is only incremental.  Facebook is just the latest in a long line of tools that make it easier for people and groups to go online (and to be found easily by journalists).  A few years ago, MySpace was the spot to find people; before that, bloggers were the instantly searchable source of potential experts; before that, the web itself could be searched easily for groups that had web sites; before that, topic-specific listservs were considered terrific resources.  And that's only going back to the late 1990s.  Each of these new tools was better in some way than the previous one, but none of them had gigantic implications.

The reporter in the BeatBlogging story, who found the alumni of the Scotland School for Veterans Children on Facebook, could just as easily have done a web search, as I did, and in less than a minute found the regular web site for the alumni association, along with contact information.


Facebook offers a wealth of information about "Friends" and family, but so far, it has done little to change newsgathering and distribution.

Garlic September 24



After buying some garlic at the supermarket, I noticed some of the cloves were sprouting.  I planted one of them September 6, 2009.  It currently lives next to my window.  Nice cloudy light this morning.

Friday, October 2, 2009

Garlic September 23




After buying some garlic at the supermarket, I noticed some of the cloves were sprouting.  I planted one of them September 6, 2009.  It currently lives next to my window.

Thursday, October 1, 2009

Garlic September 22



After buying some garlic at the supermarket, I noticed some of the cloves were sprouting.  I planted one of them September 6, 2009.  It currently lives next to my window.