The latest issue of the Journal of Computer-Mediated Communication has an article by Mark Tremayne et al., "Issue Publics on the Web: Applying Network Theory to the War Blogosphere." The authors offer rich descriptive statistics of both conservative and liberal blogs, and correlate blog popularity with a number of likely explanatory variables. Because blogs often provide external links as well as commentary, the authors also apply basic network centrality metrics (degree, betweenness, closeness) to those networks. The result is a clear (if not exactly unexpected) picture of the blogosphere on a critical political issue.
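For readers who want to poke at similar link data themselves, the three metrics are easy to reproduce. Here is a minimal sketch using Python's networkx on an invented toy network; the blog names and links below are placeholders, not the article's data.

```python
# Sketch: degree, betweenness, and closeness centrality on a toy blog-link network.
# The blogs and links below are hypothetical, not drawn from the article.
import networkx as nx

links = [
    ("blog_a", "blog_b"), ("blog_a", "blog_c"),
    ("blog_b", "blog_c"), ("blog_d", "blog_a"),
    ("blog_e", "blog_a"), ("blog_c", "blog_e"),
]

G = nx.DiGraph(links)  # links are directed: who cites whom

degree = nx.in_degree_centrality(G)         # how often a blog is linked to
betweenness = nx.betweenness_centrality(G)  # how often it lies on shortest paths
closeness = nx.closeness_centrality(G)      # how near it is to everyone else

for blog in G.nodes():
    print(f"{blog}: in-degree={degree[blog]:.2f}, "
          f"betweenness={betweenness[blog]:.2f}, closeness={closeness[blog]:.2f}")
```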
Screencasting is an excellent way to convey complex ideas and processes, with your computer as media source. Add a bit of creativity (and a webcam), and you can really stretch the pedagogical envelope.
Jane (and others) note the many similarities between science-as-social-process and alternate reality gaming: both are massively "multi-player" (puzzles cannot be solved by one person alone); both develop consensual ontologies and teleologies ("reality" and purpose are collectively defined); and both produce emergent communities (solutions require broad and sustained cooperation).
She promises that soon we will see "a truly popular scientific research practice that engages the global public in hands-on, brains-on collaboration, via sites like Citizen Science and Amazon's Mechanical Turk and through immersive, story-driven play."
What I find interesting about this virtual tour is not the amazing new toys we'll all have, but the profound social, political, and economic implications these would clearly have for humanity. While I'm not convinced that any of these technological changes is a "done deal," I'd really like to see someone's take on what they suggest for the future of human freedom and identity. Thanks to IEET for the link.
U Illinois-Springfield and U San Francisco have developed ed-cast, billed as an "international clearinghouse for sharing lectures, conversations, speeches, and related podcasts for higher education." It's a bit anemic at the moment (I count only 41 "edcasts"), but that number is certain to grow with time.
I heartily recommend trying podcasts. Traffic in our area is horrendous; sitting in my car somehow feels far more productive when I can listen to a lecture. And audiobooks make long road trips far more enjoyable.
I've also found that students respond well to being offered a choice between reading and online multimedia for their homework assignments. Thanks to ResearchBuzz for the tip.
Addendum: I don't know if it's a compatibility issue, but I can't seem to get search to work on their website. Instead, try clicking on the column headings under "results," which gives you a complete list, organized by that heading.
Today, Augmentation is two years old. When we first started, it was merely a way to annotate and share bookmarks (a goal some might say we rarely rise above). But the task of generating regular posts* has encouraged us to stay on top of the state-of-the-art (even as we may be helping to define it), which has been an incredible learning process.
Here's to another year, and all the wondrous discoveries that await.
* This is post #700 - an average of one post every day, with about two weeks vacation each year.
Jeff Clark has an interesting post at Neoformix on graphing word frequencies. Though the technique is admittedly a first pass, it does offer a fairly good gestalt of the overall content. The process (a rough code sketch follows the list):
Draw ovals for each word, scaled to reflect its frequency
Connect words appearing consecutively in the text (not counting 'stop words')
Discard ovals of words appearing fewer than 9 times
Position ovals with a spring-embedded algorithm
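As a concrete illustration, here is a rough Python sketch of that pipeline, assuming a plain-text source file. The file name, stop-word list, and plotting details are my own placeholders (and the ovals become circles sized by frequency); this is not Clark's code.

```python
# Rough sketch of the word-frequency link diagram described above.
# Uses networkx + matplotlib for the spring-embedded layout; the stop-word
# list and the frequency cutoff (9) follow the steps listed in the post.
import re
from collections import Counter

import matplotlib.pyplot as plt
import networkx as nx

STOP_WORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "it", "that"}
MIN_FREQ = 9  # discard words appearing fewer than 9 times

with open("document.txt", encoding="utf-8") as f:
    text = f.read().lower()

words = [w for w in re.findall(r"[a-z']+", text) if w not in STOP_WORDS]

freq = Counter(words)
keep = {w for w, n in freq.items() if n >= MIN_FREQ}

G = nx.Graph()
G.add_nodes_from(keep)
for w1, w2 in zip(words, words[1:]):          # consecutive non-stop words
    if w1 in keep and w2 in keep and w1 != w2:
        G.add_edge(w1, w2)

pos = nx.spring_layout(G, seed=42)            # spring-embedded positioning
sizes = [freq[w] * 50 for w in G.nodes()]     # scale node size by frequency
nx.draw_networkx(G, pos, node_size=sizes, node_color="lightblue", font_size=8)
plt.axis("off")
plt.show()
```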
What I find most interesting is that with the possible exception of Step 5, each is easily accomplished using common or freely available software (e.g., MS Word and Excel for text processing, Pajek for network visualization).
Addendum: Actually, the entire process is easily replicated; most steps can be partly automated using "Search and Replace" and COUNTIF functions in MS Word and Excel, respectively. After isolating the high-frequency words, bold them in the original document and visually scan for adjacency, deleting "extraneous" (i.e., low-frequency and non-adjacent) terms. I found the graphics easier to produce in UCInet, an inexpensive SNA tool that is both well suited to smaller networks and easier to use.
In discussing this with a colleague, we noted that it might be useful to define "adjacency" differently (e.g., by sentence or paragraph). This would be easy to accomplish with visual scans, though that approach clearly has scaling limits. However, if you convert the original document so that each "adjacency unit" becomes a separate row, you can combine Excel's COUNTIF and SEARCH functions to produce word frequencies. If the first column holds the original text, you can count up to 255 separate strings across the other columns. With a bit of creativity, you can even automatically identify specific pairings.
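For anyone who would rather skip the spreadsheet gymnastics, here is a small Python sketch of the same "adjacency unit" idea, assuming sentences as the unit. The term list and file name are hypothetical placeholders, not anything from the original exchange.

```python
# Sketch: count how often selected terms co-occur within an "adjacency unit"
# (a sentence here; swap the splitter for "\n\n" to use paragraphs instead).
import re
from collections import Counter
from itertools import combinations

TERMS = ["network", "blog", "data"]   # hypothetical high-frequency terms

with open("document.txt", encoding="utf-8") as f:
    text = f.read().lower()

units = re.split(r"[.!?]+", text)     # one "row" per sentence, like the spreadsheet layout

pair_counts = Counter()
for unit in units:
    present = {t for t in TERMS if t in unit}   # substring test, like Excel's SEARCH
    for pair in combinations(sorted(present), 2):
        pair_counts[pair] += 1

for (t1, t2), n in pair_counts.most_common():
    print(f"{t1} & {t2}: co-occur in {n} units")
```

Note that substring matching here mirrors Excel's SEARCH (so "art" would also count inside "smart"); switching to a word-boundary regex would tighten that up.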
While the network graphics are interesting, I suspect it would be more useful to compare measures such as centrality and geodesic distance between specific terms across multiple documents.
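Assuming each document has already been converted into a word network (as in the sketch above), such a comparison might look something like the following; the helper function and its inputs are my own invention, not an established routine.

```python
# Sketch: compare the geodesic distance and centrality of two terms across
# several documents. Assumes word_graphs maps document names to networkx
# graphs built as in the earlier sketch; the terms are hypothetical.
import networkx as nx

def compare_terms(word_graphs, term_a, term_b):
    for name, G in word_graphs.items():
        if term_a not in G or term_b not in G:
            print(f"{name}: one or both terms absent")
            continue
        centrality = nx.betweenness_centrality(G)
        try:
            dist = nx.shortest_path_length(G, term_a, term_b)  # geodesic distance
        except nx.NetworkXNoPath:
            dist = float("inf")
        print(f"{name}: distance({term_a}, {term_b}) = {dist}, "
              f"betweenness = {centrality[term_a]:.2f} / {centrality[term_b]:.2f}")

# e.g., compare_terms({"doc1": G1, "doc2": G2}, "network", "data")
```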
From national security to IPR, demand for increasingly pervasive surveillance technologies continues to grow, fostering innovations previously seen only in some of literature's darkest nightmares:
Last month, Techdirt reported that NYC mayor Michael Bloomberg is working on a plan to allow New Yorkers to send cellphone pics of crimes directly to the NYPD (after a similar experiment in Malaysia). Reality is television now.
But according to Technovelgy, Hitachi has just announced the world's smallest RFID chip: at 0.05mm on a side (about the width of a human hair), the chip is small enough that it may be possible to simply "dust" people to track them. I wonder how tempting that might be to those who feel threatened by public protests.
Also from Technovelgy, a mobile automated license plate recognition system is now being tested in British Columbia. The "always on" system checks plates against a database, alerting officers when "hits" are found. It's not a new technology, but adding mobility would seem to introduce additional concerns.
Finally (for this post, at least), British and German engineers are developing an on-board surveillance system for airlines that uses tiny cameras and microphones (embedded in the back of every seat), together with pattern-recognition software, to look for "suspicious behavior" by anyone in the main cabin. The system flags blinking, lip licking, hair stroking, and whispering as "classic symptoms" that someone has a secret. I wonder what that might do to business-class sales.
I especially recommend the section between 8m 30s and 25m, where Cory speaks directly to such issues in his own livelihood, the history of political economy, and visionary fiction.
The folks over at Data Mining... have posted an interesting note on the problem of using the word "democratization" to describe the increasing ease of participation in a variety of cultural and scientific processes.
"Please! No More Democratization" points out that while tech may be eroding some barriers to participation in some processes, this does not necessarily equate to democratic governance. While I might quibble with the author's characterization of democracy (which focuses on republican forms), many of these experiments are more properly understood as anarchy. The absence of persistent roles and rules, of norms for choosing between alternatives and resolving conflict, means that these protean "societies" are not institutionsin any meaningful sense.
Anarchy has benefits, but also costs that severely limit its practical and normative desirability, as anyone who's tried to participate in open-access, free-for-all discussions (or opened their email) must understand in their bones. This is why we increasingly see practices like user profiles and content flagging or screening. True anarchy - especially when paired with anonymity - makes genuine, civil conversation impossible to sustain.
Transparency, responsibility, oversight - these are all concepts traditionally associated with democratic governance. But while there are examples where these concepts have been institutionalized in some online communities, there is no guarantee that such practices will always and honestly reflect anything like a democratic will. The developers and distributors of applications and infrastructure may just as easily create the illusion of freedom and fair play, even as they systematically bias the Internet towards their own agendas.
The essay is a thoughtful meditation on the nature of intellectual and artistic inspiration, and the increasing holes in traditional norms of copyright (and copywrongs). It's a fairly lengthy essay (8,000+ words), but I encourage everyone to stick with it until the end - I doubt you'll be disappointed, and bet you'll be surprised.
I didn't notice any latency issues myself, but Wired is carrying an AP report on yesterday's large, sustained attack on three of the Interweb's DNS root servers.
It was the largest such attack in five years, but perhaps even more interesting is that one of the targeted servers is run by the US DOD. The story is still unclear (publicly, at least), but the attack seems to have been wholly unsuccessful.
Last week, Myron Gutmann (ICPSR Director, U Michigan) spoke on "Preserving At-Risk Digital Social Science Data." The Data-PASS project seeks to identify and archive opinion polls, voting records, surveys, and other social science data that might otherwise be lost to the vagaries of time.