Buddy's personal timeline, a place to collect and share things from Buddy's life.
Created by ontoligent on Aug 19, 2008
Last updated: 10/12/10 at 11:59 PM
ontoligent: "Land ownership means ... the right to engineer the experience of other individuals." http://bit.ly/chja3P @joguldi talk next week at UVA SL
ontoligent: ReadWriteWeb: "ResearchGATE Offers Social Networking for Scholars and Scientists" http://rww.tw/d5V7EJ
ontoligent: RT @digitalhumanist: I've accepted an Assistant Director position @ the Maryland Institute for Technology in the Humanities (@UMD_MITH)! ...
ontoligent: Surprised by how much Lévy-Bruhl's _Notebooks_ read like a blog. Each entry is both reflective and complete, informal yet public. #anthro
ontoligent: Alain Badiou and Six Sigma both use mathematical formulae as talismanic signs to advance a social model. Math becomes ideology.
ontoligent: @divbyzero Saw your book (Euler's Gem) in the UVA bookstore today.
ontoligent: Wesch is always fresh, surprising, unmediated #edui
ontoligent: Dang -- Adrian Mackenzie has taken the word "transduction" and made it a franco-decon thing. Very interesting, but not what I had in mind
ontoligent: RT @pchop: Public domain EPUB downloads from Google Books: http://bit.ly/IbJhO
ontoligent: The worst spam is the kind you have to decide is spam.
ontoligent: "It was a pleasure to sit down ... and actually read works that would otherwise get Zotero’d into oblivion" http://shar.es/FKkL
ontoligent: Mastering the Art of French Cooking also looks like a good model of how to write programming books. Its organization and focus on method
Describe the separation of levels -- surface, ontology
Get your notes on this ...
[Perhaps write in the style of Borges ... "I have just created a word, ontosis. I don't know yet what it means fully, but it refers to the concept of meaning creation ..."]
The creation of meaning
Relevant to ontologies -- how categories get created
Ontologies are "achieved"
Ontologies are "resources"
Still, they concern the Real
CONNECT TO TRANSDUCTION
-- Review Goodman on the problem of induction and similarity
He described the problem but offered no solution, being content with categorical nominalism and art
But art is interesting -- will come back to that -- allude to its value
-- Kripke on reference (from "Naming and Necessity")
The importance of this work is still being unpacked
It runs against the grain of anti-essentialism and so cannot be "heard"
Regarded as extremely powerful within philosophy, but curiously neglected outside of it
I argue that Kripke's theory of naming addresses Goodman's concerns about induction and classification, and further, that Goodman's valorization of art provides a path for an empirical investigation of how this works. For art, generalized as symbol and ritual, provides an abundant store of comparative material to explore this connection.
The connection between Goodman's nominalism and Kripke's essentialism turns out to be Levi-Strauss
-- Compare to Levi-Strauss's science of the concrete
-- Compare to your own theory of "ostensive analyticity"
-- The role of discourse: the theory of etymons: punning in societies without writing
-- Examples from Turner, Gell, Sahlins, etc.
-- Ricoeur: non-ostensive reference
ontoligent: @edwebb According to René Girard, dangerously so, if it goes in a certain direction. Otherwise, according to Dawkins, it leads to world dom
ontoligent: WordPress does Twitter http://p2theme.com/ (thanks to @selfreliant)
ontoligent: RT @GeoffRockwell: Death 2.0 - http://tinyurl.com/mhetgn - what happens to your online identity
ontoligent: Soft crab sandwich ... mmm
ontoligent: Of zombie projects, Omeka, and anthropology. My title, read closely. http://m.yd.sl.pt Some may remember Fabian from *Time and the Other*
Example of comparative ontology
Review architecture -- textual surfaces vs ontology (describe this in another post)
Review Mary Douglas' book
ontoligent: http://twitpic.com/931m4 - Just arrived
ontoligent: I'm now officially Associate Director of SHANTI at #uva http://shanti.virginia.edu ... work cut out
ontoligent: http://twitpic.com/7jvqt - Spotted this two-masted square rigger on the Bay ...
When an instance stands for a class ...
When I get the time, I'm going to write a vocabulary creation language to support structuralist text interpretation. It will consist of two specs: one to handle marking up the surface features of text, such as rhetorical figures and tropes. This will be based on my work with the Princeton Charrette Project and it will likely incorporate some ideas from Steven Bird's work on annotation graphs. The second will be either an extension of or a variant of SKOS and|or OWL designed to represent extracted symbolic structures. It will incorporate predicates to handle relations of signification, such as has_part, has_analogy, and has_metonym, between the elements represented in the first language. At a larger level, I want to represent holistic dimensions such as context and level, as well as narratological things like encompassment, transformation, inversion, and liminality.
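The surface-markup spec might be sketched, in the spirit of annotation graphs, as labeled spans over character offsets. This is my own toy rendering, not the Charrette Project's format or Bird's actual model; all names here (Span, the layer labels) are invented for illustration.

```python
# A minimal stand-off annotation sketch: rhetorical figures as labeled
# spans over character offsets in a text. Names are illustrative only.
from dataclasses import dataclass

@dataclass(frozen=True)
class Span:
    start: int   # character offset where the annotation begins
    end: int     # character offset where it ends (exclusive)
    layer: str   # annotation layer, e.g. "figure", "clause"
    label: str   # e.g. "chiasmus", "metonymy"

text = "Fair is foul, and foul is fair."

annotations = [
    Span(0, 31, "figure", "chiasmus"),      # the whole line
    Span(0, 12, "clause", "antithesis-a"),  # "Fair is foul"
    Span(18, 30, "clause", "antithesis-b"), # "foul is fair"
]

def covering(offset, anns):
    """All annotations whose span covers a given character offset."""
    return [a for a in anns if a.start <= offset < a.end]
```

Stand-off annotation like this leaves the text untouched, so multiple interpretive layers can coexist over the same surface.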
One of the big problems I see in this project is an apparent limitation in RDF's support for triples about triples. For example, an analogy is a relation between structures, not terms. The assertion A : B :: C : D is, at minimum, an assertion about the relationship between two assertions, A : B and C : D. (The predicate of the assertions themselves is usually X has_part Y.) An analogy then looks something like this:
[A has_part B] just_as [C has_part D]
The easiest way to accomplish this task would be to provide URIs for each RDF triple. I haven't seen a general solution to this problem. I know I can create local URIs within a specific triple store, and use these in triples. But I need to define an RDF triple as a datatype first. And I anticipate problems further downstream; I wonder if the current RDF toolset is designed to handle indexing and inferencing of these kinds of triples.
If anyone has suggestions about how to handle this issue, I'd be glad to hear them.
After writing this, it strikes me that to say that two triples are analogous is just to say that they share a predicate--so long as that predicate is sufficiently specified. To assert an analogy, then, is to assert that such an identity is important or relevant in a certain context.
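The idea can be sketched in plain Python rather than an actual triple store: assign each triple an identifier (standing in for a statement URI), then assert just_as between the identifiers. The IDs and helper names below are invented for illustration.

```python
# Triples-about-triples via reification, sketched in plain Python.
# Each base triple gets an ID (standing in for a statement URI);
# the analogy is then an ordinary triple between those IDs.

triples = {}  # id -> (subject, predicate, object)

def reify(sid, s, p, o):
    """Register a triple under an identifier so other triples can refer to it."""
    triples[sid] = (s, p, o)
    return sid

st1 = reify("st1", "A", "has_part", "B")
st2 = reify("st2", "C", "has_part", "D")

# The analogy A : B :: C : D as a triple about two triples:
analogy = (st1, "just_as", st2)

def analogous(a, b):
    """Two reified triples are analogous when they share a predicate."""
    return triples[a][1] == triples[b][1]
```

The analogous() check encodes the closing observation above: asserting an analogy reduces to asserting a shared, sufficiently specified predicate, plus a claim that the identity matters in context.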
ontoligent: Nice evening at the pool, ran into @edwebb and @f_rancesca, looking forward to film with @RennieM
ontoligent: RT @sramsay The pivotal moment in Zotero v. Thomson Reuters came when @dancohen stood up and said, The truth? YOU CAN'T HANDLE THE TRUTH!
ontoligent: What I find ironic about Wesch's information abundance thesis is the dearth of ethnographic material on the web.
-- Want to define things about assertions (e.g. who said it, and where)
-- Want to make complex assertions (e.g. indirect objects, etc.)
-- How to do this? Create a schema language with parts of speech?
-- But doesn't this break the model?
-- Speaking of the model: predicates are overloaded
-- Recursive RDF
-- How would it be analyzed?
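For what it's worth, the W3C RDF vocabulary does define a standard (if clunky) reification pattern: a triple is expanded into four triples describing an rdf:Statement node, to which provenance can then be attached. A minimal sketch in plain Python tuples; the dc: property choices are my own illustrative picks, not mandated by the spec.

```python
# The standard RDF reification pattern, rendered as plain tuples:
# one triple becomes four triples describing a statement node, to
# which further assertions (who said it, where) can be attached.

def reify(stmt_id, s, p, o):
    return [
        (stmt_id, "rdf:type", "rdf:Statement"),
        (stmt_id, "rdf:subject", s),
        (stmt_id, "rdf:predicate", p),
        (stmt_id, "rdf:object", o),
    ]

graph = reify("_:st1", "A", "has_part", "B")

# Assertions about the statement itself, not about A or B:
graph.append(("_:st1", "dc:creator", "ontoligent"))  # who said it
graph.append(("_:st1", "dc:source", "field notes"))  # where it was said
```

The cost is visible immediately: every reified assertion quadruples the triple count, which bears directly on how such triples would be indexed and analyzed.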
ontoligent: Discourse analysis, with QDA tools, connects praxis (situated action) to ontology.
ontoligent: @jimgroom It's your *mana* that rubbed off on him
ontoligent: In contrast to traditional CS, the focus is not on optimization, etc., but on "interpretation support" (as opposed to "decision support")
ontoligent: @jimgroom goes to the core -- it's about this thing called "management" and the bureaucratic planning model embedded therein
The problem--current ontology and vocabulary definition languages derive from set theory (OWL) or thesaurus-making (SKOS)
RDF itself is too closely tied to the card catalog model
Humanists find these approaches limiting
We want to capture the use of figures and tropes in texts
Structuralism is well suited to providing a framework for a complementary vocabulary
Focus on relations, not elements
E.g. opposition, encompassment, analogy, etc.
Comment--RDF verbs bury this important dimension
Contexts and levels
Vocabulary for defining rules of combination for reified relations in other vocabularies
A problem--figural language is (1) constructed out of literal language, and (2) contextual
Nevertheless, it makes sense to speak of an inventory of figures and tropes
And, we want to produce documents that capture interpretations
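As a thought experiment, the relation-centered vocabulary sketched in these notes might look something like the following in Python. Every name here (the relation types, the context and level attributes, the example relations) is invented for illustration, not drawn from SKOS or OWL.

```python
# A toy structuralist vocabulary: relations, not elements, carry the
# content, with context and level as first-class attributes.
from dataclasses import dataclass

RELATION_TYPES = {"opposition", "analogy", "encompassment", "metonymy"}

@dataclass(frozen=True)
class Relation:
    rtype: str    # one of RELATION_TYPES
    source: str   # an element, or the ID of another reified relation
    target: str
    context: str  # holistic dimension: where the relation holds
    level: str    # e.g. "myth", "ritual", "narrative"

r1 = Relation("opposition", "raw", "cooked", "Amerindian myth", "myth")
r2 = Relation("encompassment", "culture", "nature", "value hierarchy", "ideology")

def by_type(relations, rtype):
    """Select relations of a given structural type."""
    return [r for r in relations if r.rtype == rtype]
```

Making context and level attributes of the relation itself, rather than of the elements, keeps the emphasis where these notes put it: on relations, not elements.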
Comparative ontology asserts that humans already have ontologies, and that machine ontologies are both projections of human ontologies (those of the numerati) and material agents that intervene in the ongoing reproduction of ontologies (everyone else's). Developers of ontologies for the web of linked data would do well to understand the nature of human ontologies, as well as the way machine ontologies intervene in the ongoing construction of social life.
Human ontologies are not like ROM programs, hard-wired into our brains and executed without modification; they are designed to be reprogrammed through engagement with the world. They are one of our most effective adaptive traits.
Ontologies are adaptive
Anthropologists have studied ontologies in the wild for a long time, under the various categories of "structure," "symbolism," "culture" and "collective representations." One of the most important contributors to the study of ontology is the American cultural anthropologist Marshall Sahlins.
Sahlins began as a cultural materialist but had a road-to-Damascus experience in the 1970s in which he got culture. You may recognize his name as the unfortunate target of fellow anthropologist Gananath Obeyesekere, who criticized Sahlins' interpretation of the events leading to Captain Cook's death in Hawai'i as orientalist. In fact, Obeyesekere's criticism was an exercise in occidentalist stereotyping and, in any case, Sahlins' control of the material eventually proved his critic's position incoherent.
Sahlins' principal theoretical contribution to cultural anthropology has been to retrieve the concept of cultural structure from the ahistorical, formalist, and mechanistic conception developed by Lévi-Strauss, whose own work on mythology belies his more theoretical pronouncements. Rather than separating structure from event (and history), and locating the former deep within a universal mind--like a camshaft responsible for the jigsaw puzzle of culture--Sahlins focuses on what he calls the "structure of the conjuncture" of structure and event. History emerges as a culturally distinctive second-order structure that results from the ongoing work of categories in praxis. So categories have a structure, but that structure undergoes reevaluation and change as it is applied to the world.
In this, Sahlins is consistent with both Victor Turner's understanding of processual structure in ritual behavior, and Bourdieu's concept of the habitus, which mediates, through improvisation, the "dialectic of objectification and embodiment." In fact, I believe that the revised structuralism developed by these anthropologists (and others) is coherent enough to deserve a name; I call it "neostructuralism."
In Islands of History Sahlins describes the process of cultural (ontological) change in terms of the "risk of reference": as cultures classify things in the world--as they deploy ontologies--they also put these ontologies at risk. For things in the world do not always behave as classified, or planned. Even the sun has an occasional eclipse. Although the keepers of culture--from priests to grandmothers--try to enforce adherence to the categories, the behavior of things will inevitably contradict the categories and call for their revision. Sahlins reads the Hawai'ians' classification of Captain Cook as Lono as just such a world-changing event.
Ritual is one mechanism humans use to synchronize the world with world view. As people grow, for example, and change statuses, rites of passage are used to mediate this "contradiction" and reclassify people so that they can fit into the system. Another mechanism is prophecy, where the reverse is true--world views are aligned with a world that has changed. Millenarian movements are the classic example of this: a prophet emerges who can make sense of the new in terms of the old, but changes the old in the process.
Rituals and prophetic movements are the original forms of change management.
This is the ongoing work of culture. Cultural reproduction is never mechanical. That is one reason we humans have history. There is always a disproportion between words and things, plans and situations.
Texts, as forms of discourse, can be likened to rituals and prophetic movements. Novels in particular are efforts both to make sense of and to influence the world, a task in which they often succeed. They deploy a set of categories that make sense, to the author at least, in a certain time and place. The risk of reference works at various levels--from the basal meanings of words out of which tropes are created, to the description of scenes in which the unsaid is shared among a presumed audience, to more elaborate allegorical mappings of fictional characters to real persons. But the referential risk of textuality is compounded as the message is removed from its original personal, cultural, and historical contexts, and the world of the text is forced to fit new contexts for new readers. Hermeneutics arose as a method to retrieve meanings lost in this way; Roman Law and the Christian Bible are two major examples of distanced texts being applied and reapplied to new situations. The French philosopher and hermeneutic theorist Paul Ricoeur called the result of this risk the "surplus of meaning" in a text, and saw it as an opportunity for a kind of ontological excavation.
Databases (and the point of this post)
Now, a data model, such as a set of tables and fields in a relational database, an XML schema of elements and attributes, or an RDF vocabulary of classes and properties, is a plan, a schema of classification. And database applications, like rituals and texts, have their own forms of referential risk to contend with. They classify the world and, in the process, both affect the world they classify and open themselves up for revision by that world as it changes.
For example, the categories produced by a requirements elicitation process for an application designed to improve some workflow, and encoded in a database that sits at the bottom of an application stack, may not accurately represent the workflow as it is actually practiced, and as it will inevitably change as new developments take place--changing personnel, clients, strategic plans, etc. The database, then, is put into a situation--the situation of the conjuncture--in which its categories are at risk.
In this situation, databases are like texts--they are built on the armature of a hard-coded ontology, and they can move beyond their original domain of applicability.
But unlike most texts, and very much like sacred texts, database applications (and their administrators) are usually given a central position within an organization. They are often deployed as key elements of an enterprise architecture that calls the institutional shots. Thus they can insulate themselves from referential risk. They can force conformity to their logic--as Michael Wesch's New Guinea villagers redesigned their settlement pattern to conform to the government census--or they can produce a black market of behaviors in an organization that bypasses the database-governed workflow. This is what faculty do who are forced to use an LMS but would rather use Google Docs.
Comparative ontology can help here. If we view ontologies as always situated, then we should (1) design systems for maximum flexibility and adaptability, and (2) learn a lesson from the ritual life of peoples around the world and throughout history: engage our ontologies in constant reevaluation and modification, making the world (of our organizations) fit where appropriate, and also refining the categories to fit the world.
To meet the first challenge, we shouldn't create overwrought ontologies, but rather focus on just enough classification to achieve the effects we need. Usually, the effects we are most concerned with are connecting people to people, people to information, and information to information, in as few links as possible.
To meet the second challenge, we may want to refine what we mean by "social operating system"--for that is precisely what a ritual system is. Maybe it's time to follow McLuhan's advice and exploit the ritual effects of the electric, in order to mitigate and shape the more dangerous effects of the electronic. When we build ontologies, maybe we should also be thinking of the physical and virtual spaces in which they will be deployed, and the material and digital artifacts that will be their vehicles of expression.
ontoligent: Nice @savageminds post "Towards an Ontological Anthropology" http://tinyurl.com/qsl72c
I am all for user-driven design methodologies. My instinct is to distrust the Central IT ethos of "we know better" because "we think more rationally about things" and all that. That perspective is based on a simultaneous over-valuing of a linear, rational notion of process ("planning") and a grudging acceptance of user behavior as "cultural" and therefore outside the scope of a requirements-gathering process.
The term "non-functional requirements" speaks volumes and captures the Central IT attitude very well. Under that category, the whole point of effective software design is swept under the rug. We know that software will be most effective when it adapts to user behavior and vice versa, but we often sidestep that issue, hoping for incremental, evolutionary changes to produce the desired effects over the long run. We miss the opportunity to innovate, leaving that to the less timid.
But I also find that user-centric methodologies are based on naive assumptions about what users want, or who The User is, or what the point of the user research is in the first place. Unless you have a very restricted audience for your software--and admittedly one often does--it is very difficult to translate the views of a few people, whether captured by focus group, survey, or even participant-observation, into generalized principles for an application. Ultimately, good design is what works, and we retrospectively attribute success to our process. But we really have no clue.
What is it that one is capturing by user-centric research, anyway? The attitudes and dispositions within a class of individuals? This can't be it. User attitudes and mental models are highly variable, and they are mutable because humans are adaptive, more than we think. If you build software based on some static notion of what users want, what they say they want, you will miss the effect software has on redefining what they really want. This is because users inhabit cultural environments, and software inevitably has effects on those environments. If you focus too much on the abstract user--what's "in" the user--you will often have the feeling of the goal posts moving. Or you may end up dismissing the user altogether as fickle and irresponsible, and go on with your own design ideas. If you design software for a living, I am sure you know what I am talking about.
I think the proper focus of user-centric software design has to be the user-in-context. That is, not the user but the Situation. But situation defined in a specific, rigorous way. Situation as the objective, institutional framework of power and infrastructure in which people work. This is difficult terrain to study, hedged in as it is by all sorts of taboos and misrecognitions that keep the social gears moving. Let me give you an example.
One of the areas where the Central IT software design ethos dominates is document management. Two factors drive the design of solutions: (1) developers assume (know) that paperlessness is a Good Thing, and (2) the paper-based workflows that users are enmeshed in are so crufty, complex, and idiosyncratic that it is impossible for users to describe them in enough detail to re-engineer them. The result is that the digital document management solution will almost always build around people's behavior, or else it will break workflows where it has to. So, instead of stepping back and rethinking the data flows entailed by a paper form, or taking advantage of the metasocial moment and asking Why are We Doing This in the First Place, document-logic is reproduced in the software. The effect is not to reproduce the old way and make it more efficient. It is something unpredictable and bound to have hidden consequences, not all of which can be good. Most likely, we've preserved the notorious stupidity of bureaucracies and ensured its continued survival in a mutant and more powerful form. Because once categories get encoded in institutional databases, the tail wags the dog. Think health insurance.
So, what to do? I suggest that we pursue theory-driven design. We actually try to make sense of the sociology and anthropology of bureaucracies and operationalize the best ideas in these discourses as design principles. We think of how software behaves as an assemblage of artifacts in a living cultural environment. This is not social engineering, nor is it to tread the tired path of "organizational behavior," a field that is too closely tied to the executive perspective. It is to pursue a rich, empirical understanding of software in the wild, or at least, the office.
Theory-driven design is not anti-empirical. It is the opposite: for a good theory generates testable hypotheses. It gives a framework to user-centric research beyond the unanswerable quest for what users really want. As they say, there is nothing so practical as a good theory.
A good starting point might be to take Ted Nelson's ideas about documents and hypertext and combine them with, say, David Graeber's critical anthropology of bureaucracy. Not to condone Graeber's anarchism, but to leverage the authenticity of perspective he brings to a discussion about the role of documents in the organization. Reading his essay, "Beyond Power/Knowledge: an exploration of the relation of power, ignorance and stupidity," it is hard not to believe that a radical rethinking of the document, and of document-logic, would benefit from his perspective.
ontoligent: It's true, though, IT follows document-logic because clients do
ontoligent: http://twitpic.com/4syq6 - This is @pchop (Mark Wardecker), about to give a talk, "From Pulp to Pepla," to some Dickinson College studen ...
KM is based on the (tacit) assumption that tacit knowledge and explicit knowledge are similar in form; one is hidden, the other shared.
But in fact what we mean by knowledge is a written document.
But is knowledge fixed like this? Or is it an emergent property of situated minds?
I say that knowledge, as a document in the head, does not exist.
A document is a performance.
A situation: things, people, etc.
Even fixed knowledge has to be constantly reformulated.
See Robert Scholes's definition of the humanities
Ever had this happen to you--you try to remember a phone number, but you can only recall it when you see the dial. What is going on here?
Clearly something is in the brain and something is not.
Knowledge is performative.
Performance is the basis of innovation.
Performance is not "execution". "Executives" like to think that, but it's not valid.
Performance is always liminal.
I Think I Like Vanilla
There is an old Far Side comic that depicts the plight of the unreciprocated lover in a characteristically humorous and insightful way. In the top half of the cartoon box, a boy lies in his bed at night worrying, in a thought balloon crowded with words, if the girl of his dreams loves him. In the bottom half, we see the girl resting in her bed, pondering: "I think I like vanilla." Ouch.
I sometimes think this comic captures the relationship between my profession, academic technology, and the faculty we serve. Attending a typical professional conference, one encounters a gleeful and buzzing world, filled with unabashed optimism about the role of technologies such as Twitter, Second Life, and World of Warcraft as transformative forces not only for education, but for the world at large. This optimism seems unaffected by the eternal recurrence of workshops, talks and panels on topics that address our inexplicably unrequited love: the disparity between how much (and in what ways) professors actually use academic technology, and the intentions, goals and programs of academic technologists. We even have a name for this: the Faculty-IT divide.
Here's a familiar topic of discussion that exemplifies this divide: "Information fluency programs are often seen as add-ons instead of academic initiatives and thus fall short of their transformative potential." The phrasing is typical: the ideal of transformation is juxtaposed to the reality of mere addition--"bolt-on" being the accepted trope. Such laments reflect the reality that I experience when I return to campus and oversee the real world tasks of supporting media production, course management software, training, etc. In this world, it is as obvious as it is pervasive that faculty view instructional technologies as supplementary resource choices, and rarely consider their impact on teaching and learning beyond their immediate logistical effects. Most faculty I know are perplexed or appalled by the idea that academic technologists are interested in radically transforming how faculty members teach--indeed, what they teach--a response that is not diminished by familiarizing them with the theoretical currents that motivate this ambition--constructivist epistemologies, theories of distributed cognition, etc. Faculty, it seems, are thinking they like vanilla.
It's not that technology doesn't have a significant impact on teaching on my campus--it certainly does, and that impact is growing. Nor is it that many professors do not genuinely embrace emerging technologies and some of the theoretical baggage that goes with them. Wikis seem particularly in demand these days, having caught the attention of many as a kind of metonym for Web 2.0 technologies as a whole (even if Wikipedia continues to be disparaged). The problem is that the nature of the relationship between teaching and technology on my campus (or on any campus where I have worked) does not match the vision of transformation that characterizes the discourse of my profession, a discourse in which the widespread adoption of a technology as radical as Second Life appears simple, logical, and obvious. I believe this discourse, with its transformational visions, contains a certain amount of denial, or perhaps misrecognition, of the fact that one's love interest does not return her calls.
The Next Great Transformation
When academic technologists use the word transformation, they reference a folk theory of history that goes something like the following. For any society or nation, a sufficiently long time span will be characterized by one or more revolutions that radically and permanently transform its fundamental institutions. In the West, two such revolutions are the secularization of world view initiated by Copernicus (and helped by Luther), and the economic and political transformations associated with the Industrial Revolution, described in the writings of Tocqueville (democracy), Marx (capitalism), and later Polanyi (the market). Wrapped around our understanding of these transformations--and despite the best efforts of postmodernists--is a grand narrative of progress, or, as we now like to say, evolution: In each case, the inhumanity witnessed at each juncture notwithstanding, we perceive a movement from less freedom and knowledge to more. And we like that.
Now, it is a particular conceit among media theorists from McLuhan to Manovich, and the multitudes of information technologists who, by a process of professional osmosis and elective affinity, instinctively agree with these theorists, that the real revolution that kicked off the whole enlightenment thing was the Gutenberg revolution. Without the printing press and the drastic reduction in the social cost of information it introduced, neither Copernicus nor Luther could have accomplished what they did.
And that is how we view the Internet and the Web. It too is kicking off a series of revolutions and transformations that will fundamentally change our world view and our political, economic and religious institutions. And, of course, our educational institutions as well.
I call this the Next Great Transformation thesis, adapting Polanyi's phrase. This particular understanding of history is not new--it has obvious roots in Toffler's Third Wave, for example. What distinguishes the NGT thesis is its strong adherence to, and faith in, a more or less linear understanding of media determinism, itself a special case of technological determinism--the belief that what drives culture change are changes in modes of communication and technologies of representation.
As technology specialists, we tacitly embrace the Next Great Transformation thesis, and we tend to view its effects on academia as positive, natural, and inevitable. We believe we have a mission, on campus and beyond, to close the gap between the new informational order and the old industrial one. Although we are too politic to say it (being aware of where our paychecks originate), we often regard those of our clients who resist closing this gap as at best uninformed and at worst counter-productive Luddites. We see ourselves as champions of innovation and as liaisons between the stodgy culture of the bookish academic and the dynamic pragmatism of the Millennials and the Net Generation. Our sympathies decidedly slant toward the latter. We regard students as the sources of their own learning, and consistently espouse a constructivist epistemology that locates innovation in the student's mind and in her social activities. Our role is to build technology-enabled contexts for this magic to happen, and we want our faculty clients to play along.
In sharp contrast, the majority of faculty with whom I have worked closely regard technology as a means to an end and as a convenience; they do not regard it as transformative in the sense understood by technologists. They share this view--especially regarding convenience--with the students they teach. Faculty who do get the transformation bit often regard it with a combination of excitement and ambivalence. On the negative side, faculty--especially in the humanities and social sciences, which have deep traditions of social criticism--instinctively regard technologies as value-laden, cultural constructs, which may transform the surface of culture but do nothing to alter the deep structures that shape core institutions such as the family or the work place. Where they are not critical, faculty tend to be very practical, weighing the teaching benefits against the time costs of technology use in a cold calculus that can be dispiriting for the technologist to witness. And because digital work remains a liminal activity in the eyes of peers and professional organizations, tenure-track faculty take a risk in pursuing it.
Now, aside from the relative merits of each view, there is a great irony in the contrast between them. Whereas advocates of e-learning pedagogies universally espouse a shift toward constructivist, participatory learning models with respect to students, their implicit model of technology service toward faculty is mostly passive. So often the goal of a training workshop is framed as facilitating the transfer of progressive technologies and practices from the Internet and consumer technoculture--social software, podcasting, text messaging, mobile computing--to the technically conservative world of academics, whom we caricature as practitioners of a dated "chalk and talk" and "sage on the stage" pedagogy.
Our vehicles for teaching these technologies mostly reflect this view. So often we seek to transfer our knowledge through workshops with low or sporadic attendance, through summer institutes that entice faculty with stipends, or through centralized resources serving self-selected walk-in traffic, far removed from actual situations of learning or research. Departmental liaison models seem more effective (for schools that can afford them), but often the technology specialist, alone in the field, has difficulty seeing the forest for the trees. In short, we do not model our ideology where it would seem to matter most--in teaching teachers how to teach with technology.
Of course, we would love to model what we teach with how we teach faculty. But we have a hard time finding purchase when the social software message falls on deaf, or even hostile, ears. What is at stake in teaching faculty about new technologies--and what we technologists often have a hard time appreciating--is the long-term investment that academics and librarians have made in particular modes of knowledge production, even as new modes of scholarly communication are taking root. These modes involve more than simply the representation and communication of knowledge through books, journals, and conferences--they involve the entrenched social facts of tenure, prestige, authority, and recognition that drive the academic economy. In short, for all of our faith in the importance of information and the ideal of communicative transparency, the social life of academic information dictates that our technologies, if they are ever to take root, will have to address the level of power. And that is an area where we fear to tread.
Exceptions to the Rule
The great exceptions to this passive model are in the areas of e-science, humanities computing, and other fields where digital primary sources have become mainstream. Faculty in disciplines such as bioinformatics, geography (GIS), classics, and text criticism have, on their own terms and often independently of official IT, invested in technologies to manage and visualize the primary and secondary sources on which their fields are founded. These faculty represent the opposite problem to technology support personnel: they do not need to be shown how technology can be an effective teaching tool because they instinctively understand its value as a mode of representation and communication. When they come to central IT for help, they tend to know what they want, and they are quick to adopt--and adapt--technologies to their needs. In many cases, they have a lot to teach us.
One thing they do teach us is something academics have taught us for centuries: content matters, and changes in medium matter--are revolutionary--precisely to the extent that they allow us to represent, communicate, and develop content more effectively and efficiently. Without real effects on how scholars produce and interpret difficult ideas, new media remain useful primarily as vehicles for entertainment, advertising, and ideology.
I think these research-based modes of technology use--what I broadly classify as "digital scholarship"--can serve as a resource for framing a different model of academic technology support, even in the areas of teaching and learning. I believe that a digital scholarship model can help close the information literacy gap and usher in a genuinely transformative mode of education that would have effects in both directions--on the culture of teaching and learning, and on the culture of technology (which we tend to take for granted).
Where Leadership is Needed
So, we should do two things: (1) nurture these developments--through the development of cyberinfrastructure, etc.--and (2) study them, via an ethnography of scholarship that follows the lead of Nancy Foster.
There are signs that things are changing. The recent efforts to redefine the technology needs of the academy (as opposed to its administrators) around the concept of cyberinfrastructure are to be commended ... the Bamboo project ...
But for this to succeed, we need a particular kind of leadership [blah blah blah].
- Cyberinfrastructure is key - but will fail if not part of a realistic vision, based on the core values of the liberal arts. Mention the Bamboo project ...
- Digital content - Librarians
- Media fluency - Academic Technologists
- Situated Pedagogy - Ethnography of scholarship, e.g. Nancy Foster (University of Rochester) ...
I suppose it is the prerogative of different generations to simultaneously dismiss and retrieve old ideas by introducing new words for them. I have in mind words like "metacognition" and "knowledge management." In both cases there is an existing word that more or less describes the referent of the new(ish) word: epistemology and education respectively. Both metacognition and epistemology refer to, roughly, the activity of "thinking about thinking," and the core mission of education is the management of knowledge -- producing it, storing it, reproducing it, etc. However, in each case, the intent of the new word is clearly different from the older one, and this difference can be attributed to a different organizational context: knowledge management is about education and research in corporate settings (now defined as "knowledge producers"), as opposed to society or the world at large, while metacognition has flourished within the relatively narrow context of academic departments of education.
But why the complete absence of the old words in the new discourses? Why not call metacognition something like "applied epistemology"? Or knowledge management "corporate education" or "corporate teaching and learning"? It can't be for lack of familiarity with the older words. Nor can we assume that the newer words are more "sticky" and easier to use; that just begs the question. I think it's clear that the problem with these constructions is their connotations: they carry too much semantic baggage.
But now here's the thing: the new words do not simply stand alongside the old ones; they actually seem to take their places. The new words occupy the position of the old words at an abstract level, but displace their implicit social meanings in the process. The effect is to implicitly usher in newer or different institutions in the space reserved for the old. So knowledge management is about education, yes, but education in a business setting where knowledge is viewed as a competitive advantage, not a general good for the betterment of humankind. And, eventually, this will have implications for education itself, as essays like "Applying Corporate Knowledge Management Practices in Higher Education" become more common.
Similarly, metacognition is about epistemology, but not as an abstract philosophical concern, nor one tied to the remote activity of a purely scientific enterprise as it once was; it is epistemology in the service of classroom teaching and learning, where the users of the word no doubt think it belongs. So the effect of the word "metacognition" is to usher out the ivory tower and to replace it with the more populist institution of the classroom. And this meaning is consistent with the current ethos of educational populism, as expressed in wider ideas like connectivism.
So language really does embody the social: these words are actually the encodings and amplifiers of social changes happening right now. Perhaps those of us familiar with the older names of things would do well to note these shifts and pay attention to their institutional commitments.
ontoligent: Whereof one cannot speak, thereof one must tweet.
Here is a Flash applet I once wrote to demonstrate (to myself) the concept of synergy. It's based on some typewriter art I once did in college while trying to avoid writing a paper. Move the black ball to the right--the number shows the degree of the angle. I know--there should be a slider bar there.
Click here for a larger version.
Symbols -- from core symbols like the Virgin of Guadalupe to abstract ones like the whiteness of Melville's whale -- fix and generate ontological categories. How and why this happens is a question of deep interest to me, but that it is true seems obvious and well established. Human beings create symbols like plants produce oxygen, and symbolic formation is inextricably bound to a defining trait of human beings -- rich, discursive, and always already metacognitive language (human beings have always talked about talking, a practice that must be regarded as intrinsic to human language). Linguistics up to now, wedded as it has been to a Chomskyan Cartesianism, has missed this role, although philosophers have not lost sight of it. As Ricoeur wrote, "the symbol gives rise to thought."
The relationship between language and symbolism is complex and (still) not well understood. My own view is close to that of the (admittedly discredited in its original form) generative semantics school, associated with George Lakoff. I believe that categories and rules are in some way generated by the transduction of meaning that takes place between neural representations of concrete objects. Discursive language -- not "deep grammar" -- tries to fix these meanings in propositional form, but the symbolic substrate has a dynamic quality, in no small measure due to its adaptive nature in response to what Merleau-Ponty called the "primacy of perception."
With writing and then printing, and the monopolization of explicit knowledge, in the form of written records, reference works, etc., by governments, universities, etc., the relationship between discursive fixation and embodied symbols becomes tenuous and contested, resulting in a mind/body problem unfamiliar to ritual societies.
In any case, a number of practical observations follow from this tenet, which I will quickly enumerate, and hopefully take up later:
Human ontologies are not plans.
Human ontologies are overdetermined. That is, there is always more than one way to express an ontology. The fixing of meanings will always fail if the goal is to create non-overlapping, non-redundant descriptions.
Human ontologies are rhizomic. In their natural form, ontologies are not hierarchical. Rather, the hierarchical representation is one form of serialization that works well because of its analogy to kinship (see Durkheim and Mauss, Primitive Classification).
Human ontologies are local and situated.
Human ontologies evolve.
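The first three tenets above can be sketched in code. The following is a minimal illustration in plain Python (all category names are invented for the example): broader-than links between categories form a directed graph rather than a tree, so any single hierarchical serialization of an overdetermined ontology must drop or duplicate edges.

```python
# Categories mapped to their "broader" concepts. "tomato" is
# overdetermined: it has a botanical parent and a culinary one.
broader = {
    "tomato":    {"fruit", "vegetable"},
    "fruit":     {"food"},
    "vegetable": {"food"},
    "food":      set(),
}

def is_tree(broader_map):
    """A strict hierarchy allows at most one broader concept per term."""
    return all(len(parents) <= 1 for parents in broader_map.values())

# False: "tomato" has two parents, so any tree rendering is lossy.
print(is_tree(broader))
```

The point of the sketch is that the hierarchy is not in the data; it is one possible, lossy projection of a rhizomic structure.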
Just as institutions require shared ontologies to function, so are institutions involved in creating the categories that make up ontologies. Admittedly, assigning agency to institutions poses a number of questions that need to be answered; I won't attempt that here. Essentially, I follow Mary Douglas (see video below), especially her How Institutions Think. The categories and rules that comprise human ontologies follow and enable a practical logic that in turn enables sustainable human interaction.
A corollary idea to this is that ontologies exist (in large part) to mediate social action. They are the result of human beings' mutual calibration of individual cognition through collective interaction. Typically, this calibration takes place through ritual. But media (old and new) -- which grow out of but eventually displace ritual -- also take on this role. (McLuhan's frequent reference to ritual to describe the effects of new media is telling.)
"Social Life Makes the Categories"
The late Mary Douglas being interviewed in 2006
about her book Purity and Danger.
See the full interview and films at ScienceStage.
This is the first in a series of posts in which I define some of the tenets of comparative ontology, in order to flesh out its significance to the work of making and using RDF vocabularies.
The overwhelming conclusion to be drawn from the ethnographic record is that human beings are surprisingly structured in their thinking and behavior, even when that behavior seems to be random and non-linear. Although it is a commonplace to observe that social life is inherently messy, unpredictable, and resistant to capture by physics-like laws (post-Einsteinian included), it remains true that patterns of culture are remarkably widespread and persistent. Languages, marriage practices, calendar systems, gift exchange systems, markets, etc. -- essentially, any functional human institution -- all rely on shared categories and rules to operate, and these are discoverable and describable. The mistake of the structuralists was to conceive of these categories and rules as logical in a formal, almost scholastic sense, like a plan that agents follow strictly. Instead, it is more likely that they exist as dispositions that constrain behavior and encourage improvisation, as Bourdieu describes in his idea of the habitus (which he got from Mauss, by the way).
To the extent that formal RDF ontologies are meant to mediate human-computer interaction (and not simply allow computers to share information), ontologies should be designed to interdigitate with the categories of their human participants. Machine ontologies should be interoperable with human ontologies. They should be designed to encourage the symbiotic development and evolution of human collective representations (to use Durkheim's expression), given the role of the networked computer and computer network as an institution in its own right.
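One way such interdigitation might look in practice is sketched below, in plain Python standing in for an RDF triple store (the terms and communities are invented for illustration; in actual RDF one could use skos:broader, which permits a concept to have multiple broader concepts). Instead of one canonical taxonomy, each broader-than link is kept as a statement tagged with the community whose practice asserts it -- echoing Melville's whale, a mammal to the biologist and a "fish" in folk taxonomy.

```python
# Quads: (subject, predicate, object, community-of-practice).
# Competing human categorizations coexist instead of being
# resolved into a single machine hierarchy.
statements = [
    ("whale", "broader", "mammal", "scientific"),
    ("whale", "broader", "fish",   "folk"),
    ("whale", "broader", "symbol", "literary"),
]

def broader_for(term, community):
    """Return the broader concepts a given community assigns to a term."""
    return {o for s, p, o, ctx in statements
            if s == term and p == "broader" and ctx == community}

print(broader_for("whale", "scientific"))  # {'mammal'}
print(broader_for("whale", "folk"))        # {'fish'}
```

The design choice is the point: the ontology records who categorizes, not just what the categories are, leaving room for human collective representations to evolve alongside the machine's.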