Monday, November 19, 2007

The labor of reading -- wirelessly

Big news in the world of reading today. First, as Tom Regan reported this morning on the National Public Radio News Blog, "A new National Endowment for the Arts report says Americans are reading less. And young people are reading a lot less."
The report, To Read or Not to Read: A Question of National Consequence, found that the average person between 15 and 24 spends 2 1/2 hours a day watching TV and seven minutes reading. Between 1992 and 2002, the percentage of young adults (18-24) who voluntarily read a book each year (we're talking about one book here) dropped from 59 percent to 52 percent.

Lest you think this only applies to books, the full report (available as a PDF for your online reading convenience) points out that the research team looked at "all varieties of reading, including fiction and nonfiction genres in various formats such as books, magazines, newspapers, and online reading". (If you don't have time to read the whole report, you can always consult the executive summary which the NEA so thoughtfully and rather ironically provided.)

The second reading-related news item of the day is notable precisely because it blurs those very categories of "reading books" and "reading online". As the New York Times weblog Bits reported today, Amazon.com has entered the portable digital book market with a combination $400 handheld reading device and an iPod-touch-like mobile shopping experience (using nearly-ubiquitous cell phone networks rather than locally contingent WiFi hotspots). Saul Hansell describes the content pricing model:
Amazon has 90,000 titles for sale at launch, including books from all major publishers.

Best sellers and new releases will cost $9.99. That represents a substantial savings off of Amazon’s already discounted prices. Amazon is currently selling hardcover bestsellers for roughly $13 to $20 and trade paperbacks for $8 to $11.

The Kindle will also download and display newspapers, magazines and blogs. But in an era when most Internet content is offered free with advertising, Amazon has decided to charge monthly fees for these publications.

A follow-up post by Brad Stone at the NYT already speculates about what a future release of the Kindle might incorporate -- suggesting the current version might have some significant problems to overcome with consumers. A visit to Amazon.com's own Kindle product page reveals some of the initial public reaction to this product -- at least from those who themselves choose to spend time reading and posting to Amazon.com review threads. Comments seem to range from "I have been using it for about 2 months and it has changed the way I read," to "$400 is not a price point that interests me at all. I would pay half that perhaps, but only if I could also read things in different formats". And there's plenty more text for online reading about this device already -- even a Wikipedia article, with over four dozen edits since about 7am this morning. (Maybe this makes some sense, since free online access to Wikipedia is one of the touted features of the Kindle -- granting a serious sort of legitimacy to the open-source encyclopedia that shouldn't be minimized.)

What's my take? The technological form factor (battery, size, screen), together with the wireless ability to purchase an additional book anytime, anywhere, constitutes the real innovation of the Kindle, I think -- but only for omnivorous readers of current popular fiction and non-fiction whose work and leisure lives are so fragmented through time and space (think taxicabs, airports, hotels, cars) that both carrying around a load of books and stopping to seek out a place to dispose of and purchase a new book are burdensome. If I were a manager at Apple, I'd seriously think about the ramifications of adding e-text reading power to a next large-screen, trade-book-size generation of the iPod, as well as wrapping ebooks more tightly than they already are into the iTunes Music and Video Store (which you'd have to rename again). In fact, I'm surprised Amazon has put together its own hardware solution rather than partnering with Apple or Sony (one of the other early e-book entrants).

But the real killer application for an academic information laborer like me, already affiliated with an institution which pays license fees for ubiquitous wired access to physical and digital text? The ability to tap into the PDF resources of my academic library and the databases it subscribes to (ProQuest, JSTOR, Project MUSE, etc.) as well as the copyright-free resources of the Google Books Project from a similarly-styled, low-power, cell-phone-network, tablet-form-factor e-book reader priced at $100. For free.

Saturday, September 29, 2007

The divisions of Web 2.0 labor

I'm attending a small conference this weekend at the University of Utah entitled "Frontiers of New Media: Historical and Cultural Explorations of Region, Identity, and Power in the Development of New Communications Technologies" and having a great time. The keynote was by Henry Jenkins of MIT on various issues dealing with the so-called "Web 2.0" phenomenon which often gets reduced to the soundbite of "user generated content". Henry did a great job of problematizing the terms used not only for "Web 2.0" but for its active participants -- are they users, producers, consumers, "prosumers," "produsers," etc.? But beyond these complementary and contradictory roles, or even the actually-existing and culturally-imagined social groups which they attach to (and fail to attach to) across the globe, the thing that really started me thinking was the question of what kind of "content" they (we) were producing. What do we even mean when we say "Web content"? What is the work being done? What is the knowledge or artifice being produced?

This concerns me because I see the same blanket statements about "content" (or "knowledge" or "information") being made all through the long history of contact between libraries and computers that I'm currently exploring for my next book project. Today on the Web, when we say "content" we're often referring to amateur, non-profit, or grassroots textual, image, sound, or video products which parallel those of professional, for-profit, or mainstream cultural producers -- insightful blog entries, artistic photographs, entertaining podcasts, or engaging videos. But we produce much more than this. We tag and organize and sort and collate and arrange in chronology in a pattern of "metadata production" (or "metacontent production") as much as, if not more than, we engage in content production. We produce instructions and guides and tutorials for acting in the real world or on physical artifacts, calling ourselves "Make" or "DIY" participants. We arrange activist or expressive or simply exhausting cultural moments, from political protests to zombie performance art, carried out ephemerally and then perhaps recaptured and redigitized as "content" in a second pass later. And we even build algorithms -- tools and calculators and sorters and all those things that all those scientists and engineers and mathematicians thought all those computers would be naturally used for all those years ago. I feel like exploring, mapping, and questioning this vast division of labor is perhaps one of the next challenges for those of us who ponder the meaning of Web 2.0 ... or even of Library 1.0.

Monday, August 06, 2007

The value of your advertising-consumption labor

A fascinating little piece in the New York Times today discusses the value that the company Digitas, one of the newest acquisitions of the global strategic communications giant Publicis Groupe, is supposed to add to their existing suite of advertising firms (including brand names like Saatchi & Saatchi and Leo Burnett):

The plan is to build a global digital ad network that uses offshore labor to create thousands of versions of ads. Then, using data about consumers and computer algorithms, the network will decide which advertising message to show at which moment to every person who turns on a computer, cellphone or — eventually — a television.

[...]

Greater production capacity is needed, Mr. Kenny says, to make enough clips to be able to move away from mass advertising to personalized ads. He estimates that in the United States, some companies are already running about 4,000 versions of an ad for a single brand, whereas 10 years ago they might have run three to five versions.

[...]

Digitas uses data from companies like Google and Yahoo and customer data from each advertiser to develop proprietary models about which ads should be shown the first time someone sees an ad, the second time, after a purchase is made, and so on. The ads vary, depending on a customer’s age, location and past exposure to the ads.

[...]

Mr. Kenny said that Digitas constantly struggles to find enough employees with the technical expertise to use complex data to slice and dice ads for companies like General Motors and Procter & Gamble. As Digitas invests in countries like China and India, he said, the Publicis Groupe will benefit from the global talent pool — and perhaps create more demand for advertising in those countries.

Two very different conceptions of labor are at work in these short descriptions of the Digitas strategy. On the one hand, vast legions of low-wage but talented communications workers from across the globe are necessary to generate thousands of different advertising permutations for each campaign and code them with the metadata required for smart computer algorithms to invoke them effectively. These workers would seem to fall somewhere between the "clerical" and the "creative" in the pecking order of advertising agencies. But in either case, the commodities that they produce -- bite-sized, hyper-targeted advertising messages -- are imbued with a huge investment of information labor.
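
To make the mechanism concrete, here is a purely illustrative toy model of the scheme the article describes: ad variants carry hand-entered targeting metadata, and an algorithm matches that metadata against what is known (or assumed) about the viewer. Every name, field, and rule below is my invention, not Digitas's actual system.

```python
# Toy model: ad variants coded with targeting metadata by human
# workers, selected algorithmically per viewer. Entirely invented
# for illustration -- not the real Digitas system.
ad_variants = [
    {"copy": "Variant A", "age_band": "18-34", "exposures": 0},
    {"copy": "Variant B", "age_band": "18-34", "exposures": 1},
    {"copy": "Variant C", "age_band": "35-54", "exposures": 0},
]

def pick_ad(viewer):
    """Return the first variant whose metadata matches the viewer profile."""
    for ad in ad_variants:
        if (ad["age_band"] == viewer["age_band"]
                and ad["exposures"] == viewer["past_exposures"]):
            return ad["copy"]
    return "Variant A"  # generic fallback when nothing matches

# A second exposure for a young viewer gets a different message:
print(pick_ad({"age_band": "18-34", "past_exposures": 1}))  # → Variant B
```

Even this ten-line caricature makes the division of labor visible: someone has to write and tag each variant before any algorithm can "personalize" anything.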

On the other hand sit the targets of these advertisements: the presumably affluent and information-saturated consumers who view these ads not only on old-style mass-marketed and relatively impersonal television screens, magazine pages, and billboards, but on the hyper-customized margins of the web pages they visit throughout the day on their laptop computers, cell phones, and portable gaming devices. What we might think of as their attentional labor time -- the work that these coveted consumers do in the moment that their eyes and brains flit to the advertising message popping up on their digital screens -- is so valuable that it is analyzed and specified by complicated computer algorithms working on both the back end and the front end of their web interaction. These algorithms use as their raw material both the real and assumed demographic information about those coveted consumers and the matching advertising metadata so carefully produced and entered by the low-paid marketing information laborers around the globe.

What results for me is a vision of immense disparity in global communicative labor: the communication skills of so many being used to transmit messages of such unimaginable granularity into the communicative lives of so few -- all for the purposes of profit maximization. It's an uneven pattern that we can probably see in other realms of message-making as well, from political speech to non-profit fundraising to, yes, academic knowledge production. I wonder if there's value in analyzing such disparities in the labor and value of communication patterns more closely.

Monday, July 30, 2007

The neoliberal university and differential tuition for different majors

A New York Times article this past weekend (Jonathan D. Glater, "Certain degrees now cost more at public universities") alerted me to something I'm ashamed to say I hadn't realized about my own University of Wisconsin — specifically, about the undergraduate degree in our School of Business:

Starting this fall, juniors and seniors pursuing an undergraduate major in the business school at the University of Wisconsin, Madison, will pay $500 more each semester than classmates.

[...]

Officials at universities that have recently implemented higher tuition for specific majors say students have supported the move.

Students in the business school at the University of Wisconsin, for example, got behind the program because they believed that it would support things like a top-notch faculty.

With tuition and fees for an in-state undergraduate at UW-Madison estimated at $6,730 for the 2006-07 academic year, a $500-per-semester surcharge -- an extra $1,000 per year -- raises a student's tuition bill by nearly 15%.
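
Making the arithmetic explicit -- a quick sketch comparing the annualized surcharge ($500 in each of two semesters) against the annual tuition-and-fees figure quoted above:

```python
# Size of the business-school surcharge relative to the estimated
# 2006-07 resident undergraduate tuition and fees at UW-Madison.
annual_tuition = 6730          # dollars per academic year
surcharge_per_semester = 500   # dollars
annual_surcharge = surcharge_per_semester * 2

share = annual_surcharge / annual_tuition
print(f"Surcharge as share of tuition: {share:.1%}")  # → 14.9%
```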

The political-economic conditions that have inspired this new funding structure include two decades of growing neoliberal governance strategies at both the state and national level. By "neoliberal governance" I mean the philosophy that the workings of capitalist markets can effectively substitute for democratic decisions of cultural value and social justice in every aspect of human life. (For more on this concept, see David Harvey's 2006 book, A brief history of neoliberalism.) In the case of Wisconsin, this policy is evident in the fact that the state has reduced its level of taxpayer-funded support to less than 20% of the total budget of the university. We are to be run "more like a business" according to the refrain at conservative political rallies. Any notion that the public research university is an investment in economic growth, cultural understanding, and basic knowledge production as a resource commons for all is swept aside; the university must become "entrepreneurial," not in the broad sense of fostering learning and innovation in scientific, artistic, and intellectual pursuits, but in the narrow sense of attracting private capital for its operating expenses. The underlying assumption in all this is stark: any operating expenses unable to attract such private capital are by definition not of value in the university, and deserving of cuts rather than subsidy.

The rationale behind our business school adopting this neoliberal model of differential tuition seems to rely on two core beliefs: (1) that business school faculty both require and are deserving of higher salaries than faculty in other units in order to maintain the value of the business school (based on what the market is willing to pay to hire away these faculty both inside and outside of academia); and (2) that business school students are both able and eager to pay a higher tuition in order to maintain the value of their degree (based on what the market is willing to pay to initially hire these graduates). In both of these arguments, "value" is understood narrowly as market value — the price of a salary. Any other definitions of value — say, how to "value" a multidisciplinary and eclectic department of scholars who don't all do the same kind of research on the same kind of topics and who, inevitably, don't all command the same salary in the idealized open market of corporate consulting; or, perhaps, how to "value" a broad and diverse undergraduate education which includes courses taken outside of a single school or department or specialty — are silenced from discussion.

Postgraduate students working toward a master's degree or doctorate, of course, already pay differential tuition in many cases, depending on the professions they are preparing for. In Spring 2007, for example, a generic full-time resident graduate student paid $4,592 per semester, but a full-time resident law student paid $6,326 and a full-time resident medical student paid $11,132. By comparison, a full-time business master's student paid $5,320. In the business school, there was even a slightly discounted rate for evening MBA students, who paid only $5,103 per semester.
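
Just how differential is this differential tuition? A small sketch, using only the Spring 2007 per-semester figures quoted above, computes each program's premium over the generic graduate rate:

```python
# Spring 2007 per-semester resident graduate tuition at UW-Madison
# (figures as quoted in this post), and each program's premium over
# the generic graduate rate.
rates = {
    "generic graduate": 4592,
    "evening MBA": 5103,
    "business master's": 5320,
    "law": 6326,
    "medicine": 11132,
}
base = rates["generic graduate"]
for program, tuition in rates.items():
    premium = (tuition - base) / base
    print(f"{program}: ${tuition:,} ({premium:+.0%} vs. generic)")
```

The premiums range from about 11% for the evening MBA to well over 100% for medicine -- which puts the new $500 undergraduate surcharge in perspective as a first step, not an endpoint.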

But this new structure in the business school differentiates students not on the basis of their professional specializations after achieving an undergraduate degree; instead, it redefines the meaning of the undergraduate degree itself as a professional degree worth paying a legitimate premium for. I can't offer any insight into whether that $500 undergraduate business school premium is worth the money; however, I would like to question whether it is a legitimate charge, in considering the meaning of the university itself.

But first, a thought experiment. If charging differential tuition based on an undergraduate department is a smart, "entrepreneurial" idea, then why stop at the departmental scale of action? Why not extend the practice to the scale of the individual professor? After all, clearly some professors are valued more highly in the market than others (as seen by the outside offers they get from other universities or firms in private industry). And aren't these the very faculty members (and the ideas they produce and promote) which really make the business school competitive? Instead of charging an extra $500 to all business school majors, the school could simply charge an extra $100 each time an undergraduate takes a class with one of these premium professors — investing the money back into their salaries alone, of course. Such a scheme would be a real incentive to the rest of the business school faculty to innovate!

But then again, why stop at the scale of the individual? I mean, even the most valuable professors sometimes teach courses which aren't as useful to the bottom line of getting a good job once a student graduates (I'm thinking of pesky "history" and "ethics" courses here, but undoubtedly there are others). And within those courses, certainly not all of the material covered in the syllabus ends up on the final exam. Hmmm ... how about having students pay by the day instead of by the course? Faculty could determine which days of lecture are the most valuable — based on the instrumental goals of resume padding and test preparation — and students could pay a $10 premium each time they attended one of those days. (This would have the happy effect of allowing some of those "less valued" professors to at least teach a few days of useful material in their otherwise valueless courses, too.) Students who decided not to attend those days of class wouldn't have to pay the extra premium. Power to allocate the original $500 premium designated to the department would instead go in "micropayments" to those exact portions of valuable courses, taught by those exact professors of value, which make the business school (and the undergraduate degree it confers) competitive. Market logic triumphant!

If those proposals sounded misguided and extreme, then consider this. Instead of demanding that particular groups of undergraduates pay a $500 premium to particular groups of professors in particular departments under the assumption that the knowledge gains produced by one field — or the career outcomes of one student constituency — are more valuable than another, what if all undergraduates were asked to pay an extra $100 and that money was allocated democratically through faculty debate over a combination of factors — which departments serve more undergraduate majors, which departments have the potential to earn revenue from (still-higher-paying) professional graduate students, which departments represent "market failures" (private industry declining to support the Havens Center for Social Justice, for example) nevertheless deserving of university subsidy, and, most crucially, rational deliberation about what kind of university experience our students (and our society) deserve as a whole?

I'm worried about the new structure that our business school — and our university — have instituted. I work in two similar departments which see themselves as "Schools" (both within the College of Arts and Sciences) to which the lures of differential undergraduate tuition would also be attractive. One is a School of Journalism and Mass Communication; the other is a School of Library and Information Studies. Both could make similar arguments for charging differential tuition to undergraduates. But I also do the kind of intellectual work that puts me on the fringes of the mainstream in both of those Schools. When one department decides it can charge more for its services than another department, it is using the "natural" logic of the market, whether it admits to this or not, to make a value claim about its own knowledge production — and in a university, this amounts to a value claim about the benefit of its work not only to an undergraduate job seeker, but to the culture as a whole. I am not prepared to endorse such a claim, even for my own fields.

On its web site, our School of Business touts that it offers its undergraduates "The resources of a world-class public university with the personal contact of a comparatively small business school." But if that school — or any other — is willing to value its faculty and its knowledge above and beyond that of the rest of the university through differential tuition, then it is cooperating in the same neoliberal agenda that uses stark and simple market logic to decide which "resources of a world-class public university" deserve funding in the first place. In effect, the very definition of the public university is changing before our eyes. Let's not simply look the other way.

Friday, July 13, 2007

The Wisconsin State Representative who wanted to kill the law school, and why it's more than just a silly news story

This summer the Wisconsin Senate and Assembly -- the former controlled by Democrats, the latter by Republicans -- are trying to come to a compromise two-year budget for our state. In recent years, the University of Wisconsin has suffered under state budgets. This season, the Senate seems ready to invest in the university while the Assembly would like to defund it further. At stake is a familiar story of two competing conceptions of the university: for progressives, it is a site of serious knowledge production, a form of cost-effective collective corporate training, and a source of economic innovation to the local, state, and national economy; but to conservatives, it represents a site of public subsidy that should be privatized (economic conservatives) and a site of dangerous indoctrination that should be censored (social conservatives).

Today in the Capital Times comes a revelation that would bring some humor to the entire exercise if it weren't true: one of the conservative Assembly representatives actually managed to insert language into the official Assembly version of the budget which zeroed out funding of the UW-Madison law school.

A lawmaker who persuaded the Assembly to eliminate all state funding for the University of Wisconsin Law School says his reasoning is simple: There are too many lawyers in Wisconsin.

"We don't need more ambulance chasers. We don't need frivolous lawsuits. And we don't need attorneys making people's lives miserable when they go to family court for divorces," said Rep. Frank Lasee, R-Green Bay. "And I think that having too many attorneys leads to all those bad results."

[...]

"When we have an overabundance of attorneys already, there's no point in subsidizing the education of more attorneys," Lasee said.

Let me first go on record as saying that I disagree with Lasee's proposal, that I think his proposal represents the worst sort of anti-intellectual "legislation by personal prejudice," and that I am appalled that the Assembly leadership let such language slip into their budget proposal unchallenged.

But my bigger problem with this incident is the way it is being treated in the press as some sort of ridiculous and ironic individual aberration ("this legislator wants to get rid of lawyers, ha ha; he must have a personal axe to grind against the law school, what a joke"). Instead I think it represents a real and growing change in the way that the difficult labor of knowledge production is understood and valued in society.

Neither Lasee nor his critics seem to consider the UW-Madison law school a site of knowledge production. Instead, they see it as a site of lawyer production; a site where individual entrepreneurs are trained, credentialed, and then certified for (take your pick) predatory release on the consumer public, or distinguished public service to the citizenry. But lost in all this debate over whether the state should subsidize the increase in numbers of any given occupation, trade, or profession, is the thought that any department of the university does more than job training.

Faculty, staff, and students all over our university are involved in research and exploration, teaching and public service, producing and translating and critically questioning knowledge itself. This function is essential not only to a healthy economy (whether under a conservative or a progressive definition of economic health), but to a healthy citizenry and a healthy culture. To me the irony is that the very institution through which the processes and products of our legal system come under critical, historical, and cultural scrutiny -- the law school -- is itself seen narrowly by supporters and critics alike as a diploma mill, not to mention subject to the personal legislative whim of one zealot or another.

This is a pattern that we've seen -- and continue to see -- again and again, from the calls to privatize public television to the demands that libraries be run "more like businesses." Those of us involved in knowledge production, organization, dissemination, and critique have to challenge these narrow constructions, these stereotypes, these misunderstandings -- and not dismiss them as jokes for the late night talk shows.

Wednesday, June 20, 2007

Web 2.0 is more than just "you"

Time magazine's "Person of the Year" in 2006 was "You," and that person lived in a place called "Web 2.0". This was the "you" of new social-networking and content-sharing web sites like YouTube, Flickr, MySpace, Wikipedia, and -- yes -- Blogger. It was the "you" who labored with the latest personal and portable text, audio, and video production tools to produce free and original content for the World Wide Web -- especially for those Web 1.0 corporations like Amazon and Google who now owned so much of the new Web 2.0 landscape and benefited from so much of that free Web 2.0 labor. But the growth of Web 2.0 wasn't seen as the result of these corporate giants and their projects for commercialization, commodification, brand-building and revenue-growing. Somehow, the success of Web 2.0 was due to you.

The "you" of Web 2.0 was not without contradictions, however. While progressive, both in your technological acumen and in your willingness to open your life to the Internet, "you" were also an amateur, a loudmouth, a zealot, a short-attention-span child pretending to be a grown-up -- alternately posing as a journalist, a politico, an activist, an author, a professor, an expert of one kind or another. If Web 2.0 was ruled by "you," it was the land where "they" the experts were unwelcome, untrusted, underprivileged and even deported. Again, never mind that most of the ideas, claims, and revelations which were discussed, debated, and derided by "you" in Web 2.0 were actually produced behind the scenes by "them" -- those representatives of powerful Web 1.0 institutions such as corporations, NGOs, governments and universities, still doing most of their knowledge production in Real World 1.0. Somehow, the failure of Web 2.0 rested with you.

And so here I sit, one of "you," typing away at my little corner of Web 2.0 (care of the corporate infrastructure owned by Google and the discretionary time granted by the university which employs me). Folks in my broad field of communication and information studies are still debating whether Web 2.0 is repressive, liberatory, or both (a set of weblog postings by former ALA head Michael Gorman and others over at Britannica.com is the most recent). Yet the more I read about, think about, and experience Web 2.0, the more dissatisfied I become with both the positive and the negative characterizations of it. Web 2.0 is an uneven geography, not so much pitting expert against amateur knowledge production, but blurring the spaces between the two, and revealing for all of us the problems of playing both expert and amateur roles -- in both knowledge-production and knowledge-consumption activities -- more intensively and interchangeably throughout our daily times and travels than ever before.

Let me try to lay out this argument for "you." First of all, engaging in the production of Web 2.0 knowledge as amateurs does not necessarily mean that you cease to participate in more traditional forms of knowledge-production as experts. After all, a quick look at the history of "digital divide" statistics at almost any scale shows that it has been the most intensively-educated, most professionally-employed, most economically-privileged members of society who have had the most opportunity and power in building Web 2.0 over the last decade or so (much to the detriment of the utopian potential of Web 2.0, I would add). Most of you creators of Web 2.0 knowledge online continue to wrestle with knowledge offline, whether as managers or teachers, journalists or artists. With any luck, you're bringing your offline expertise online; but even if you're not, that offline expertise is still available to others to bring online themselves. Undoubtedly, though, given the different time-space demands of producing Web 2.0 knowledge (blogs go "stale" after just a few hours of inactivity) versus real-world knowledge (produced according to working weeks, semester schedules and quarterly investor deadlines), you fragment your knowledge production activities in each realm differently.

Similarly, consuming Web 2.0 knowledge resources is more likely a selective activity than a substitution effect (even with that subset of you most likely to produce, and most feared to rely exclusively upon, Web 2.0 knowledge: college students). In times and places where you happen to have access to physical information -- or when you place yourself in such settings through social and cultural conventions -- you can still read a complicated book, take lecture notes with pen and paper, deconstruct the painting hanging in front of you. But in times and places with Web 2.0 connections, questions asked can now become questions answered (at least tentatively) through online collaborative encyclopedias, film guides, or photo travelogues. Rather than substitution, fragmentation and reorganization are the activities you experiment with. The online availability of print metadata means that the time you spend browsing for books in the library is vastly reduced. But that doesn't mean you stop going in the first place.

Finally, it is through those connections between Web 2.0 and Real World 1.0 that you bring to bear your new personal, wireless, mobile, and perpetually-active technologies -- from wi-fi laptops to Internet-capable mobile phones. These devices -- like online access and experience itself, still subject to a digital divide along the expected lines -- complicate your current time-space patterns of knowledge production in both Web 2.0 and Real World 1.0. In terms of production, ubiquitous connectivity outside the office means that you can be working on your professional industry analysis or your graduate thesis at home, in transit, or on vacation. But high-speed Web access within the office means that your coffee breaks are no longer spent around the water cooler, but typing on Blogger or uploading camera photos to Flickr. You can consult collaboratively-provided consumer information online while roaming the aisles of the grocery store. But you can also do some instant online fact-checking or footnote-following when you're reading that history book under the covers before bedtime. The physical infrastructure now available to you, allowing you to alter the spaces and times in which you draw from and contribute to Web 2.0 knowledge during your busy day, becomes nearly as important as the original virtual infrastructure that enabled you to produce and consume Web 2.0 knowledge in the first place.

Where does all this leave "you"? Perhaps you are not as important as "they" think. After all, they still build and own those virtual and physical infrastructures -- they being the corporations, organizations, and governments which employ, engage, and serve you. You will continue to restructure your production and consumption of Web 2.0 knowledge, but always in a tightly-coupled dialectic with the production and consumption of Real World 1.0 knowledge. The potential exists for a positive feedback relationship here -- producing more knowledge, in more ways, with more checks and balances, and more points of entry, made accessible and understandable to more people than ever before. But it's a decision that is, perhaps, both up to "you" and out of "your" control.

Monday, June 04, 2007

Reconceptualizing "information labor" as "imaginative labor"

I'm uncomfortable with the term "information labor" -- just as I'm uncomfortable with the terms "information society," "information technology," "information studies," and the like -- but I'm unsure about what to propose as a substitute. In some sense every labor process can be seen to depend on information, every physical artifact can be represented by information, every cultural communication can be reduced to information. But if information is everything then it explains nothing.

There's the term "knowledge work" of course, which implies some sort of greater value than "information labor." "Information" suggests potentially useful but unprocessed data, while "knowledge" suggests a certain intrinsic or predetermined value to that information. The troublesome concept of "truth" also seems bound up in the idea of knowledge more than in the idea of information. Perhaps "information labor" transforms the raw materials of information into knowledge? Perhaps engaging in knowledge work is a precondition to making, defending, and reconsidering truth claims in the world? But then are information workers necessarily less skilled, valued, or compensated than knowledge workers? Still unsatisfactory.

The term "creative labor" carries with it similar problems. We are told that it is to a new "creative class" of workers that we must look in order to rescue our culture, our economy, and our urban environment in an age of political-economic globalization. Can "creativity" be taught or is it an intrinsic gift? Are the products of creative work necessarily meant to contain or produce knowledge? Can't one be creative without having much access to most storehouses of information? And certainly a century of mass communication advertising has shown us that creativity and truth don't necessarily accompany one another. Shouldn't knowledge and information be expected to have a closer claim on such concepts?

Some have focused on the mental mechanics of information, knowledge, or creative work and coined terms like "symbolic analysis." Such work is assumed to be more difficult and thus more valuable than the physical labors of extractive, manufacturing, or service work. At the core of such efforts, it would seem, is the ability to understand, manipulate, and generate utterances in various languages -- spoken or written, numerical or theoretical, visual or musical. Here I'm uncomfortable with the easy split between the head and the hand -- any language seems to me to be biologically and materially rooted in the bodily and environmental history of the individual trying to communicate. But I'm also uncomfortable with the dry reduction of all aesthetic and truth claims to the movement of sign and signifier. Surely we are more than Turing machines.

So lately I've been mulling over the idea of "imaginative labor" as a useful bridge between these different concepts. Imagination requires memory, language, and mental manipulation -- each of which might be augmented by imaginative technologies of all sorts -- but it is something beyond the hundred monkeys hammering out a Shakespeare sonnet at random. Imagination requires a sense of time and space, a sense of change and play, a motivation for moving beyond the status quo (whether to a nostalgic past or a progressive future). And imagination can scale up out of our isolated dreams and diatribes, either in the communication between imaginative individuals or as the shared imaginary enacted daily and transformed over time within a cultural group.

There's something about the various demands which imagination makes upon us that attracts me here. Being willing and able to imagine the world as it is not -- as it once was, as it might be, or as it currently appears from a different point of view -- takes education and empathy and effort. Thus imaginative work seems to be a particular form of labor which is enhanced by quality information, required for productive innovation, and perhaps even essential for daily reproduction.

I think I'm going to try to imagine for a while what such a reconceptualization of "information technologies" as "imaginative technologies" might add to our understanding of our world.

Friday, May 25, 2007

Dispatch from the Wisconsin Idea Road Trip 2007

Every year in the spring, a diverse and engaged group of four dozen or so UW-Madison faculty and staff sign on to a five-day bus trip across the state known as the "Wisconsin Idea Seminar." The purposes are many. The event is certainly a fun and (hopefully) positive public relations event, as evidenced by the participation of scholarship-raising alumni and local newspaper reporters. In an economic environment where direct government appropriations only account for 19% of the university's operating budget, portraying UW-Madison to citizens and legislators all across the state in a positive light is an important goal. But in the end I think we as participants learn more about the state of Wisconsin than the state of Wisconsin learns about us. We've seen a thriving global plastic packaging firm in Oshkosh, an energy-producing dairy farm in the Fox Valley, an agricultural and gaming economy on the Oneida reservation, a mechanized cherry orchard in Door County, a maximum security prison in Green Bay, and several examples of the rich natural environment (and environmental ethics) that are preserved and reproduced by both the university's College of Agriculture and the state Department of Natural Resources. And the trip isn't even over.

One evening during all of this, several of us gathered over drinks on the cool moonlit lawn of our Ephraim bed and breakfast to discuss the themes that had emerged so far. Amidst the good-natured joking and unwinding, some very serious issues quickly emerged. Wisconsin was a state rich in resources, labor, and ideas, but apprehensive about its place in a vast and interlocking set of competitive battles -- for tourist dollars, for state dollars, for corporate investment, for federal notice, for agricultural export, or for global status and prestige. The stark logic of economic competitiveness seemed to structure every conversation, affect every citizen, invade every institution. We consoled ourselves in public proclamations of our "innovativeness," our "adaptability," our "progressivism." But troubling realities of industrial and agricultural restructuring, racially disproportionate incarceration, and declining funding for public education made such claims ring hollow.

Into this contradictory mix of comfort and crisis comes the University. According to the Wisconsin Idea, "the boundaries of the University classroom are the boundaries of the state itself." In other words, the teaching, research, and service which originate in Madison should have as their focus the many peoples, communities, industries, and interests of Wisconsin at large. Citizens deserve to see a direct effect — more particularly, a direct economic effect (in terms of competitive advantage) — for their sustained investment in our University (even as that investment continues to drop below 19%).

I have a particular lens through which I view this idea. As a UW faculty member who studies information and communication processes — not just the technologies which enable those processes, but the laborers and consumers who enact them — I am beginning to think that the Wisconsin Idea is less an idealization of an economic production process (if the community subsidizes the academics, then the academics will increase the wealth of the community) than an idealization of a knowledge production process (if the community subsidizes the production of knowledge through research, then the university enacts the dissemination of knowledge through teaching, publication, and conferencing).

Understanding the Wisconsin Idea in this way, however, one must move beyond the overly-simplified communication dynamic between "academy" and "community." If there's one thing that this seminar road trip has illuminated for me, it's that in this state, neither the academy nor the community is homogeneous in its origins, its approaches, its interests, or its power. Just as there are both affluent and struggling towns within our political geography, there are both well-resourced and struggling departments within our disciplinary geography. Just as the swaths of "red" counties and "blue" counties vie for power in our presidential elections, both political critique and corporate partnership can vie for prominence in each faculty member's research. And just as a wide variety of ethnic, language, and cultural groups have migrated (and continue to migrate) through the Wisconsin landscape over the last thousand years, so does our University draw students, staff, and faculty from all corners of the globe, suffused with all manner of personal philosophies and subject to all manner of public prejudices. It is not enough to simply brand both the state and the state university "diverse." The point is to wrestle with the ways in which diversities of all sorts, and at all scales, affect the processes of knowledge production.

For this reason I believe that reducing the state's plight (and the university's purpose) to one of "competitiveness" undermines the power of this diversity in knowledge production from the very start. Issues of environmental understanding, stewardship, and sustainability may not be reducible (or translatable) to market logic. Issues of cultural collision, conflict, and cooperation, while having profound links to economic power, nevertheless involve more than one's position in the labor market. And the same "high technology" that we might hope to deploy in order to attract and retain high-paying jobs cannot substitute for an informed, engaged, and media-literate political public. The life of our state is reducible to none of these single narratives. Neither is the life of the University.

Thus for me, the "Wisconsin Idea" stands for more than just extending the boundaries of the classroom to the boundaries of the state. It means extending the meaning of teaching to include publication and engagement at both local and global levels. It means extending the meaning of disciplinary research to incorporate multidisciplinary team research involving diverse groups, as well as interdisciplinary translation of research on the part of diverse individuals. And it means seeing service not only as a way of demonstrating an economic return on investment, but as a way of reminding ourselves and our many publics that investments in knowledge of all sorts — both science and art, both critique and creativity, both practical and theoretical — yield returns of their own.

Saturday, January 27, 2007

The labor of translation

My latest book project -- being copyedited as we speak and hopefully on track for a Fall 2007 printing -- deals with issues of transcoding and translating information between the modes of text and speech, specifically in the case of television closed captioning. Along the way I learned a little bit about the information labor of print translation, a fascinating subject about which further information studies and print culture history volumes should be written. So it was with some appreciation that I read an article in the Guardian today about the labor necessary to translate Harry Potter to cultures and languages around the globe:

Of the 325 million Harry Potter books sold around the world, some 100 million copies don't contain a single line of JK Rowling's prose. They're mediated by the work of other writers who set the tone, create suspense and humour, and give the characters their distinctive voices and accents. The only thing these translators have no impact on whatsoever is the plot, which of course is Rowling's alone.

The moment Bloomsbury put out their next press release announcing that Rowling has delivered book seven and the publication date has been set, more than 60 translators across the world - from Europe to South America, Africa to Asia - will start sharpening their pencils. When that first published copy appears, their race will begin.

It's a race against publishers' deadlines, of course; in certain countries, where the quality of second-language English is very high, it's a race to get the book published in (say) Norwegian, or Danish, before your entire market decides not to bother waiting for the translation, and you find that you're trying to sell it to people who've already read the book in the original.

In some cases it's a race against unofficial translators, too; in China, where enforcement of international copyright law leaves something to be desired, IPR parasites churn out their quick and shoddy renegade versions more or less with impunity. These range from fan-produced translations published online, to brand-new books in the HP series sold on street corners, like the rather peculiar attempt at a book five that appeared while Rowling was in fact still hard at work in Edinburgh writing it (Rowling shares this distinction with Cervantes, who was understandably taken aback to find the second part of Don Quixote published unofficially before he'd had the chance to get round to writing it).
As this excerpt suggests, translation is not simply a straightforward word-substitution process, in danger of being replaced by simple software algorithms, but a very human pursuit somewhere between "art" and "science". Yet it is also a pursuit constrained by the technologies and economics of printing and distribution on a global scale. (Read the full article here.)