Saturday, November 21, 2009

University-based reporting, or university-assisted reporting?

(Reposted from my School's new weblog.)

In an article for the Chronicle of Higher Education this week entitled "University-Based Reporting Could Keep Journalism Alive," media scholars Michael Schudson and Leonard Downie Jr. discuss the fact that "in recent years, more journalism schools have plunged into producing news for the public" (including ours):

Florida International University now has an arrangement in which the Miami Herald, Palm Beach Post, and South Florida Sun-Sentinel use the work of student journalists. Columbia's Stabile Center for Investigative Journalism has in its few years of existence had students produce work that has appeared in The New York Times, the Albany Times Union, Salon, and on PBS and NPR. Students at the Berkeley Graduate School of Journalism have produced work for the public posted on the school's news Web sites. It is beginning another news Web site in cooperation with San Francisco's KQED public radio and television stations. The Walter Cronkite School of Journalism and Mass Communication at Arizona State University runs the Cronkite News Service, which provides student-reported work to 30 Arizona client news outlets, while other ASU journalism students have worked as paid reporters in the Phoenix suburbs for the Web site of the major metro daily in the city, The Arizona Republic. Similar work is taking place at Boston University, Northwestern University, the Universities of Maryland and Wisconsin, and elsewhere.

While department-run student newspapers, special seminars on investigative reporting, and exclusive internship relationships with professional journalism projects are not new in journalism education, Schudson and Downie argue that the Web has enabled such reporting to reach a much wider audience, in a much more timely manner, than ever before: "Publishing for the general public can now be done at minimal cost—no need to contract out to a printing company, no need to distribute to newsstands—just construct a Web site. Distribution has moved from major barrier to trivial expense."

Here at UW-Madison, of course, our situation is different from those of the stand-alone schools of journalism at Columbia and Arizona State where Schudson and Downie work. We're a School of Journalism & Mass Communication (SJMC) whose teaching, research, and service span a range of media industries and knowledge-production practices, from the analytical, investigative practices of careful journalism (whether online, on air, or in print), to the targeted, persuasive practices of ethical strategic communication (whether by businesses, non-profits, or governments). Our classes incorporate not only the skills and concepts necessary to succeed in these industries, but also the context necessary to understand how these industries work together (and sometimes work against each other) in a global media ecology.

So for our School, the connection between our undergraduate and graduate educational mission and our larger knowledge-production research and service mission is what motivates our participation in community journalism projects where our students "produce news for the public." And rather than going it alone, we prefer collaborating with local, professional media firms and non-profit organizations. Here are just a few examples:

  • Madison Commons. This innovative online partnership between local neighborhood organizations (the East Isthmus Neighborhood Planning Council, the South Metropolitan Planning Council, and the Northside Planning Council) and local for-profit media (The Capital Times, Wisconsin State Journal, Isthmus, and Channel 3000) was created by SJMC Professor Lewis Friedland and the UW-Madison Center for Communication and Democracy. It's a great example of graduate student researchers and community citizen journalists working together with both democratic civic groups and local mainstream media.

  • Wisconsin Center for Investigative Journalism. Started by longtime SJMC lecturer Andy Hall, WCIJ is "a first-of-its-kind alliance with public broadcasting journalists in six cities around the state, plus students and faculty of the journalism school at Wisconsin’s flagship university" which "combines innovative technology with time-tested journalistic techniques to increase the transparency of official actions, intensify the search for solutions to governmental and societal problems, strengthen democracy and raise the quality of investigative journalism." SJMC Professor Jack Mitchell sits on the board, and three current SJMC students plus one recent SJMC graduate work as reporters in the project.
  • All Together Now Madison. Spearheaded by Brennan Nardi (editor, Madison Magazine), Bill Lueders (news editor, Isthmus), Andy Hall (executive director, Wisconsin Center for Investigative Journalism), and our own SJMC Professor Deborah Blum, ATN is "a collaborative journalism endeavor by news media in Madison, Wisconsin, to produce print, broadcast and online reports on a common theme." The project has already connected with several SJMC reporting classes. Their first set of reports, on "Our Ailing Health Care System," is available now.

Schudson and Downie ended their article by reminding us that, "Thinking through what universities can do for journalism requires some serious conceptual work about how best to integrate the legitimate educational and research missions of the university with service to society." I've only thrown out a few of the concrete connections to live, investigative, community journalism that our School has helped to create and nurture, but I think that each one of them fills that double role that Schudson and Downie suggest. Anybody want to chime in with more examples, or propose further ideas?

Sunday, September 20, 2009

Update: Blogging the Digital Labour Conference

As mentioned previously on this blog, the University of Western Ontario's Faculty of Information and Media Studies will be hosting a conference on Digital Labo(u)r, October 16-18, 2009.

I will be in attendance at the conference and will blog coverage of it here, including notes on sessions and other happenings of potential interest to the readership.

If you can't make it to Canada, stay tuned here for reports from the event.

Wednesday, September 16, 2009

Murdoch on Digital Journalism: The Ultimate Union-Buster

One must at least admire Rupert Murdoch for his unabashed frankness. The Financial Times reports that Murdoch, in a seeming about-face, has come to herald the new era of the Kindle and similar electronic newsreading devices from a truly pragmatic standpoint. Although he predicts it may take up to 20 years for such devices to supplant the current paper-and-ink industry, Murdoch waxes rhapsodic on the future portended by such a shift:

"'Then we’re going to have no paper, no printing plants, no unions,' said Mr Murdoch, who battled printing unions at his Wapping plant in London more than 20 years ago. 'It’s going to be great.'"

Saturday, August 08, 2009

Conference: Digital Labour: Workers, Authors, Citizens

Readers of this blog may be interested in attending or following this upcoming conference at the University of Western Ontario, October 16-18, 2009. It looks to be fascinating.

Digital Labour: Workers, Authors, Citizens

A conference hosted by the Digital Labour Group, Faculty of Information and Media Studies, The University of Western Ontario, October 16-18, 2009, London, Ontario, Canada.

'Digital Labour: Workers, Authors, Citizens' addresses the implications of digital labour as they are emerging in practice, politics, policy, culture, and theoretical enquiry. As workers, as authors, and as citizens, we are increasingly summoned and disciplined by new digital technologies that define the workplace and produce ever more complex regimes of surveillance and control. At the same time, new possibilities for agency and new spaces for collectivity are born from these multiplying digital innovations. This conference aims to explore this social dialectic, with a specific focus on new forms of labour.

Read more at the conference website.

Adaptive Technologies, Labor and the Practice of Hindering Access

[NB: This entry started out as a response to Greg's post below, but grew verbose enough to mandate its own space. This also marks my first official entry on this blog; thanks, Greg, for this opportunity to participate in the dialog.]

An interesting post, and one that prompts me to reflect upon my own past work with "adaptive" or "assistive" technologies for people with a wide range of different abilities, such as blindness and hearing impairment or deafness. For many of these populations, such technologies, whether text-to-speech functionality or the ability to use a "captioned" telephone (cf. the product created by local-to-Madison company CapTel), actually enable and facilitate individuals' own labor. In some cases, people who were late-deafened left the workforce, only to return once these adaptive technologies became available to them (allowing for business-related phone use, for example).

Interestingly, in the case of relay services, whether traditional (TTY) or more modern telephone "captioning" services, an immense amount of human labor is required to make these services function (see this diagram from CapTel for a simplified explanation of the process), not to mention what goes into television closed captioning, though I would do well to leave a discussion of that to Greg. Natural language translation is one of the last great computing frontiers and a programming/processing conundrum. Automating it with any kind of success rate involves a great deal of human intervention, often at low wages and in shift work, outside of these companies' engineering departments. It is just one example of how a highly technical product or service is entirely intertwined with the unskilled labor at the core of its functionality.

Taking the human interface out of this loop, while undoubtedly the ultimate goal for company management, is simply not feasible technologically at this point. Yet, to the end user, this human intervention is entirely invisible, by design. Captions appear almost instantaneously on the phone unit, as if by magic, giving the illusion of an entirely automated process.

Meanwhile, and only slightly tangentially, true text-to-speech functionality that does not require human labor at its delivery point is being challenged by a hodge-podge of industry players who would like to eliminate it from the Amazon Kindle. In this case, a technology that holds immense promise for legions of potential users, including people who are blind or visually impaired and people who have dyslexia or other text- or language-based impairments, is being threatened by the content industry (e.g., the Authors Guild, the MPAA), which perceives this facilitating and potentially life-altering technology, one that requires no human intervention beyond user and device, as a potential impediment to its seemingly unfettered earning potential. This issue is further complicated by its introduction at the World Intellectual Property Organization (WIPO), where it has been framed primarily as an issue of industry retention of DRM/TPM over content, rather than one of access and fairness, as many affected by the disabling of text-to-speech might be more likely to characterize it.

It would be intriguing to see a cost-benefit analysis that accurately reported the money and hours these industry coalitions are expending (not to mention the loss of PR capital, translated into real dollars) to make sure that blind people and people with dyslexia are prevented from benefiting from an adaptive technology, one that could have profound positive outcomes for engaging people in the digital labor economy. What happens when these industry representatives turn their targets on screen readers and other assistive technologies that allow many people to do their jobs, provide access to computers, and allow people to live and work in a digital context when their different abilities might otherwise make that impossible?

[The U.S. Copyright Office published a Notice of Inquiry on this topic in March of 2009 and received 33 comments during the comment period, one of which was filed jointly on behalf of the American Library Association (ALA), the Association of College and Research Libraries (ACRL), and the Association of Research Libraries (ARL) and can be read here. Other comments, filed by disability advocacy groups, private citizens, content industry attorneys, and others, can be accessed here.]

Thursday, July 23, 2009

Uncovering speech-to-text labor

My most recent book concerns a form of information labor I refer to as "speech-to-text" labor — the work of transcribing and translating, whether after-the-fact or in real time, a person's spoken words to printed text. For over a century, the use of special stenographic systems of listening, memorization, and notation has represented one means to accomplish this labor, aided by an ever-changing mix of technologies, from Stenotype keyboards to laptop computers. Another means, dating back not quite as long, has employed speech recording and playback devices, from the wax cylinders of early dictation machines to the embedded digital audio recording chips of today. But either way, a human transcriber/translator was always involved at some point in the process.

For many decades, however, a third means to accomplish speech-to-text labor has been in the works: one which attempts to substitute computational algorithms for human listening and judgment, these days often quite successfully. Whether for producing records of courtroom testimony, displaying captions for late-night television, or developing transcripts of global wiretapping efforts, the act of interpreting, understanding (to a degree), and transcoding human speech seems to be a task which, given a smart enough program and a fast enough machine, computers ought to be able to do.

An interesting post over at the BBC's technology blog caught my eye recently because I think it exemplifies the fact that even with the latest versions of these kinds of technologies, human labor is nearly always still present in the speech-to-text loop: sometimes because humans provide more accuracy in the final product, and sometimes because humans represent a lower-cost, more scalable, more flexible way of accomplishing these tasks. The case in question is a venture called Spinvox, "a great British technology success story, using brilliant voice-recognition software to decode your voicemail messages and turn them into text." The blogger's question was: do machines really decode these voicemails, or do humans?

Still wishing to be convinced that it was people not machines listening to my messages, I tried another tactic. It was suggested to me that if I recorded a message and then sent it five times in a row to my mobile, then a computer would provide the same result every time. Well my message was deliberately stumbling and full of quite difficult words - including my rather tricky name. But every version that came back to me in text form was radically different - and pretty inaccurate. So unless Spinvox is employing a whole lot of rather confused computers to listen and transcribe messages, it sounds like the job was being done by a variety of agents.

Why does this matter? After all Spinvox has always been clear that there is a human element in the work - though when it says it can call on "human experts for assistance", you might imagine Cambridge boffins rather than overseas call centre staff. But the fact that so much of its work still appears to rely on people simply listening and typing could have implications for its finances and its data security.

I don't find it surprising that Spinvox would rely on such a spatial, temporal, skill and wage division of labor — farming snippets of complicated translations out, 24 hours a day, to a dispersed network of highly-structured and inexpensive spots around the globe for nearly-instant human decoding. I do find it interesting that "security" is the main concern here. The idea that a snippet of a voice mail, decoded by a low-wage call-center worker, could represent a security risk to the caller or the receiver reminds me of the late 19th century concerns (which I explored in my first book) that telegraph messenger boys would find insider investment knowledge by peeking into the printed versions of telegrams that they hand-carried into and out of the electrical wired networks. (Who knows, if this worry over the security of transcribed and translated voicemail takes hold, it might motivate the same kind of solution for some as the problem did a century ago — writing and speaking in code.)
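The division of labor described above amounts to a confidence-threshold router: let the machine try first, and silently hand low-confidence snippets to a rotating pool of human workers. The following sketch is purely illustrative; every name, threshold, and the stub "recognizer" are my own assumptions for exposition, not a description of Spinvox's actual system.

```python
# A hypothetical sketch of the human-in-the-loop transcription pattern.
# The stub recognizer, the 0.8 threshold, and all names are illustrative
# assumptions, not Spinvox's actual design.

def machine_transcribe(message):
    """Stand-in for an automatic speech recognizer: returns a
    (transcript, confidence) pair. The stub simply keys on whether
    the message is flagged as difficult."""
    if message["difficult"]:
        return "", 0.35          # garbled, low-confidence guess
    return message["speech"], 0.92

def human_transcribe(message, worker_pool):
    """Stand-in for farming a snippet out to a remote worker.
    Rotates through the pool, so repeated sends of the same audio
    reach 'a variety of agents' and may yield varying results."""
    worker = worker_pool.pop(0)
    worker_pool.append(worker)   # round-robin rotation
    return message["speech"], worker

def transcribe(message, worker_pool, threshold=0.8):
    """Route to the machine when its confidence clears the threshold;
    otherwise fall back, invisibly, to human labor."""
    text, confidence = machine_transcribe(message)
    if confidence >= threshold:
        return {"text": text, "by": "machine"}
    text, worker = human_transcribe(message, worker_pool)
    return {"text": text, "by": worker}

pool = ["agent-1", "agent-2", "agent-3"]
easy = {"speech": "call me back after lunch", "difficult": False}
hard = {"speech": "a deliberately stumbling message", "difficult": True}

print(transcribe(easy, pool))   # handled by the machine
print(transcribe(hard, pool))   # silently routed to a human
print(transcribe(hard, pool))   # a different human the second time
```

The point of the design, of course, is that the caller sees only the returned text in every case: the router makes human and machine output indistinguishable, which is exactly why the blogger had to probe it by resending the same message five times.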

For my part, I think the most interesting aspect of this case is that the boundary between what we think of as a problem amenable to a technological fix (speech-recognition software) versus a spatial/social fix (situating countless individuals in time and space who can provide piecemeal labor on demand) is still very blurry. Voicemail itself — especially when accessible through a personal, mobile device — is a technology meant to enable its privileged user to arrange the time and space of his or her own working day for maximum convenience, flexibility, and productivity. We need to remember that the freedom of one group's mobility and flexibility — even in such a small case as this — may very well come at the cost of another group's fixity and constraint.

(UPDATE: The story continues in a follow-up post, with a response from the firm.)

Friday, July 17, 2009

Rethinking the labor of blogging at "uncovering information labor"

Hello readers (all three of you). I find my blogging production has evaporated as I've been struggling with my new academic role as the Director of the University of Wisconsin-Madison School of Journalism and Mass Communication. I'm hoping to catch up on some backlogged ideas here soon, but in the meantime, I'm going to open up this blog to some trusted collaborators — like one of the graduate teaching assistants from my UW-Madison course on "The Information Society" — who will likely have much smarter (and much more timely) things to say about information labor than I have lately.

More soon, I promise.