
The Work We Have Always Needed to Do: A talk for LVAIC


Abstract

“It might well be that at this point it is a cliché to point to what our experiences with COVID are teaching us and say ‘this was always the case, it is just even more apparent now.’ The struggles we encounter as teachers, students, and library workers confined to online environments are versions of struggles that already existed in those environments (but might not have been so widely felt), and that were always the case in physical environments. When we talk about the need for engagement, when we wonder what that looks like in Zoom, it bears remembering that those questions were relevant in classrooms and lecture halls. This extremely online time in education is forcing us to ask: what is a teaching environment? What is learning? What is a library? Where are the people? Too often the easy “solutions” offered to those concerned about engagement and interactivity are those of edtech surveillance, and the alleged promises of AI. I want to talk about those promises, and the problems of reducing teaching, learning, and research to the numbers offered by edtech and library systems.”

That is the abstract I shared several months ago with the fine and kind people–Kelly Cannon, Carrie Baldwin-SoRelle, and Jess Denke–who invited me to speak with the Lehigh Valley Association of Independent Colleges group, for their symposium for library workers and faculty about information literacy. The talk I ended up giving–and that I try to capture here–had some distance from that abstract, but I did end up talking at least a bit about surveillance, care, and our responsibilities to our colleagues and students, responsibilities that well pre-date the pandemic emergency.

In the interest of care I want to position myself: I am a white woman, I am Cajun, I am of a settler people, and have spent my entire life in the US living on unceded occupied land of many different Indigenous people.  I am living and writing while on Cherokee and Catawba land, in what is now called North Carolina.  I would like to point you all to https://landback.org/donate/ and encourage you to contribute to efforts to get land back into Indigenous people’s hands.  I am donating part of my speaking fee to this organization.

It’s been a long time since March 2020, when that pandemic time kicked in, and since then I have joined a large (and I think still growing) group of people who found it hard to get anything done beyond what had to be done. I was privileged to be able to stay home, work from home, arrange for my kid to attend his last year of high school from home, and for my husband to also be able to work from our home. So it might sound like whining when I say how difficult everything has been with this pall of death and neglect. More than half a million Americans have died so far, and more will die still, and so many of these deaths could have been avoided, if not for the neglect of our government in 2020, and the capitalist impulses now that continue to keep vaccines away from people who need them, and continue to put people at risk, nationally and globally.

So I’ve turned to podcasts to motivate and distract myself. Sometimes it works. I manage to clean the kitchen, or fold the laundry; back when the only thing I could do to leave the house was go for a walk, I would listen while walking. I get to thank my daughter for introducing me to Not Another D&D Podcast, a performance and Dungeons & Dragons play podcast that I’m still listening to and that is responsible for me getting back into playing the game, which I last played when I was about 12. Playing a lycanthropic elf ranger has been an important part of my pandemic coping.

Podcasts were a way for me to engage with something without doomscrolling, and also without reading, because my ability to sit with a text and focus was destroyed, and is only slowly coming back.  In addition to D&D podcasts I’ve also been listening to You’re Wrong About, which started off as a “debunking” podcast about media coverage and misconceptions about things like the Satanic Panic, serial killers, and the so-called “obesity epidemic.”     

It was You’re Wrong About that reminded me about Jonestown, in Guyana, and the massacre of just over 900 people there in November 1978. Cult leader Jim Jones gathered vulnerable people, including drug addicts and sex workers, and also idealists and activists who believed in an end to racial segregation. In 1977 (and here I am quoting from the article Escape from Jonestown by Julia Scheeres) “New West magazine was about to publish an exposé portraying Jim Jones—by now a celebrated California powerbroker—as a charlatan who faked healings, swindled money from his followers, and fathered a son with an attractive acolyte. It was all true.”

“Few folks know that  Jim Jones was a civil rights leader in Indianapolis—integrating lunch counters and churches—and that the majority of his victims were African Americans who heeded his message of social equality. How terribly they were betrayed for believing in this dream.”

Once in Guyana, members of the People’s Temple had their passports and money taken away, and they were stuck. Jones had been talking about “revolutionary suicide” to his followers for years, but the visit of members of the media and a member of Congress spurred him to finally follow through on those plans to kill his followers (and himself).

“They drank the Kool-Aid”

If you listen to You’re Wrong About, or drill down into their source materials, you know that what the Jonestown people were offered to mask the taste of cyanide was grape-flavored Flavor-Aid, not Kool-Aid brand. But that’s not the point. The phrase “Drank the Kool-Aid” suggests the beginning of a journey into misinformation. But for the Jonestown people, it was the end. It was their death.

“They drank the Kool-Aid” is a phrase I heard a lot during the Trump administration and continue to hear with regard to QAnon during the Biden administration, and to anti-vaxx, anti-science paranoia during COVID. But that phrase, as Julia Scheeres points out in her article, is an act of erasure and injustice.

Jonestown was not a mass suicide but a mass murder, perpetrated by Jim Jones, who lied to and manipulated people based on his public vision for racial harmony, for the sake of his private vision of “a revolutionary suicide.”

Once people were in Jonestown, they knew they were trapped, and no information was going to save them. This was not an information literacy problem.

Jim Jones’ victims drank the cyanide-laced Flavor-Aid, and many of them were forced to. About a third of the victims were minors, and they were poisoned before the adults were. And that was their end. More information was not going to save them. Protecting them from predators like Jim Jones would have. Structural changes that would provide health and mental health care and civil rights–which were being fought for in the 1970s–would have.

Using “Drank the Kool-Aid” as the beginning suggests that the important story is that of Jim Jones, of our learning about him and how he victimized people in the aftermath of the mass murder at Jonestown. But the people Jim Jones victimized, and isolated from their families, and took down to Guyana with lies and then trapped there, they had stories, they were part of other people’s stories, and they cried, and drank, and died, and their stories ended.

What the phrase “Drank the Kool-Aid” signals neatly is the extent to which the speaker thinks the person in question is at fault for what they believe, and for what happens to them because of it. It signals the belief that people are rational, and that we might, if we give people enough of the “right kind” of information, prevent them from drinking the Kool-Aid.

Information alone cannot save us from the problems of QAnon, or science denialism.  Exposure to peer-reviewed articles will not necessarily debunk conspiracy theories about vaccines, because people do not encounter information in a vacuum.  They encounter information via their networks of trusted people–of family, of friends, of perceived experts who were recommended by people in that network.

The lies that are told by cult leaders and propagandists benefit someone.  Who benefits from the lies, even when they are told knowing they are lies?  Who suffers when the lies are told? Who isn’t harmed enough by the lies to work to change things?  Whiteness, white supremacy, is implicated in the lies being told now about voter fraud, the lies that led to anti-voter legislation in (for example) the state of Georgia and that are being advanced by legislatures in several more states across the country.  The stories of voter fraud are told because they are useful for the political agenda of people interested in suppressing votes, especially the votes of Black people in the US.

People are not rational. This is a problem anthropologists have long had with economics, the extent to which that field treats people as “rational actors”: predictable, subject to particular laws of behavior, and responding identically to circumstances as and when they change. People are not rational.

They are “relational”

Abeba Birhane defines and describes the need for relational ethics in AI, and I think that need applies well beyond AI:  

“At the heart of relational ethics is the need to ground key concepts such as ethics, justice, knowledge, bias, and fairness in context, history, and an engaging epistemology. Fundamental to this is the need to shift over from prioritizing rationality as of primary importance to the supremacy of relationality.”

Libraries have a long and troubled history with rationality at the core of their practices. What the phrase “Drank the Kool-Aid” also points to, when I think about rational approaches to information literacy and what they can and can’t do, is the importance of relationships. People and relationships are vital to how and why people move through the information landscape, which is always changing. And the extent to which people have agency in any given information landscape is down to who they are, what kind of power they have, and the structures that surround them that constrain their ability to make decisions on their own behalf.

Tressie McMillan Cottom said at ACRL 2021 that we value particular information because we value the people from whom that information came. I would extend that, or add to it, or put alongside it the point that we also tend to value the stories of the people who we value. Whose stories we value informs the way that the Jonestown massacre is remembered, in the disconnected turn of phrase, “Drank the Kool-Aid.” Whose stories we value helps explain why throughout the Trump administration we kept hearing the stories of his voters, of “anxious whites” and why they voted the way they did, why the dapper white supremacist was a character in news stories not exclusively, but especially, after 2016. Whose stories we value informs the current (as of May 19, 2021) coverage of the Israeli attacks on Palestinians in Gaza, current US policy on that occupation, and who we decide to listen to.

I appreciate this commentary on Twitter from Shea Swauger:  “to be clear, information literacy will NOT fix racism, sexism, xenophobia, transphobia, homophobia, classism, ableism, islamophobia, capitalism, colonialism, structural oppression, or white supremacy”

This definitely reflects my own thinking about information literacy.  

Attempting any classification of sources into “reliable” or “unreliable” cannot be a substitute for building relationships with people. And this line of thinking is not new, the argument that we cannot rely on checklists to save us from misinformation and lies. Kevin Seeber was writing about this in 2017, and this piece by Carrie Wade is from 2018.

“Beyond the nouns and the verbs of “fact-checking” and “media literacy” and all of the advertisements and marketing materials we have at our disposal, what this discourse fails to acknowledge is the ways that knowledge is socially constructed. As libraries we cannot rely on better websites to solve political problems.”

Mike Caulfield, in his work around disinformation, suggests that we help students decide who deserves their time before “going down a rabbit hole” —the more time you give misinformation, the more it distracts you from constructive and productive work/life/play.  I think we should spend less time debunking and more time shunning.  Information is surrounded by and embedded in the relationships people have with each other, and their intent towards each other in sharing information.  Even people we do not know personally are in relationships with us, structurally.  It’s worth asking, if my relationship with that piece of information is via the white supremacist organization that shared it, what obligation do I have to break that information down, or can I disengage from that particular stream, because I recognize the toxicity of the organization sharing it?

I would point here to the work emerging from UNC’s Center for Information, Technology, and Public Life, in particular the critical disinformation studies syllabus, which offers us a way of structuring our approach to information that centers people, structures, and power.    

If you think you can be and should be useful to people in terms of helping them navigate information and misinformation, you need to be a person to them first.  So cultivating presence is central to that relationship building.

Relationship building is difficult in any circumstance, and it can be especially difficult when trying to engage entirely online when you are accustomed to doing it in physical spaces. There are many ways of being human online, and very few of them involve lists of dos and don’ts, and questionnaires about information provenance. Being human, in a library context, is challenging when the library worker presence is predominantly in one-shot instruction sessions, or in online tutorials about how to use the library web sites to get to “reliable” sources (I know you know this).

Nicole Pagowsky notes that, in fact, doing one-shots can contribute to misunderstanding how to navigate information, and also to misunderstanding the library.

“The way we engage in teaching within one-shot models, and the associated expectations for measurement, both keep us in a holding pattern of reactionary yes-people unable to enact our own agency within campus power structures.”

One-shots happen in the absence of a meaningful relationship between library workers and teaching faculty, in situations where valuing how many students you are “in contact” with, or who encounter the library in those sessions, is prioritized over embedding work and selves into the processes of education, and building relationships, trusted connections that students can then call on as they navigate their education.  Transactional library experiences, reducing information evaluation to a list of tasks, obscure the larger work that we should be responsible for.

Being extremely online can make it hard to literally see people on the internet (think about all the anxiety about cameras on/off in Zoom-based teaching)–it requires a new definition of “presence” that those privileged enough to get to be in rooms and buildings have assumed meant “in the same physical place”–but engagement never was a guarantee. Think about newspapers, naps, distracted gazing out of windows in class. Lack of engagement is not new. What it looks like might be.

COVID and the pandemic emergency have led to a massive and not entirely voluntary movement to online teaching and learning practices. Those who were not already “extremely online” were confronted with the reality of digital as a place, not just a tool or a distraction or a repository for content.

If digital is a place where we teach, and we hope that students learn, what kind of place are we in?  What is a classroom?  Is it Zoom?  Is it Moodle, or Canvas?  What is a learning space?  What is a library?

Libraries have been confronting “what is a library” for a while now–it has never been just a building, or a collection of databases, but also a network of people, a collection of expertise, a node for a college or university community to connect with in the course of doing, analyzing, and disseminating academic work.  

Some universities and colleges have had the luxury of not examining what a classroom or a lecture hall is in physical spaces.  Private universities in the US have had the particular privilege of making central to their student experiences the physical, the co-location of students and faculty and facilities in a way that assumes connection and engagement.  Oxford and Cambridge in the UK have a similar advantage, and make similar assumptions.

Large state institutions and community colleges have not always had that luxury. That doesn’t mean that their physical campuses were critically examined, but that they have had to be more online, or in other ways more attuned to the distanced needs of (for example) commuter students, or students who cannot, because of life circumstances (the needs of their families, the needs of their bodies), prioritize physical presence on campus as a part of their educational experience. This is similar to the situation that individual people experience when they need to turn to online/distanced relationships to make up for what they cannot or are not experiencing in their own face-to-face/physical spaces: queer kids growing up in politically conservative contexts; Black, brown, and Indigenous people teaching in predominantly white institutions. To assume that it is impossible to build relationships in online-only spaces is to be operating under assumptions generated in contexts where it’s easy to build relationships in physical spaces, because you are surrounded by people who recognize you as part of their community.

We have always needed to do the work of recognizing that co-location is not the same thing as engagement.

And it is a concern for engagement, for evidence of student participation, that drives the market for edtech surveillance and learning analytics. Educators and administrators are being sold the idea that if you count the clicks, if you track the eye movements, if you swipe in with cards at instructional library sessions, you get a meaningful number that tells you something about engagement, about learning.

Think about what counts as Engagement on Facebook:  clicks and controversy.  

Karen Hao reported in MIT Technology Review just this past March (2021):

“The algorithms that underpin Facebook’s business weren’t created to filter out what was false or inflammatory; they were designed to make people share and engage with as much content as possible by showing them things they were most likely to be outraged or titillated by.”

That is not the kind of engagement we are going for instructionally, but the number of clicks and time spent “on task” is the kind that Learning Management Systems collect.  Collecting and counting clicks is collecting proxy data for learning, much like checklists serve as problematic proxies for the work of information literacy.  It’s not effective, and not representative of the work we or our students need to do.

What is the work of the library for, and is it information literacy?

What is the work of the university for, and is it information literacy?

What about knowledge, its production, its navigation, its analysis?

This Spring I taught an ethnographic methods class, and we approached the topic through the lens of the work of Linda Tuhiwai Smith, in particular her book Decolonizing Methodologies. She notes throughout the book the importance of relationships, and the ways that Indigenous researchers are responsible to their own networks even before they begin researching. Smith wants Indigenous researchers to ask the following questions about any given research project, and I think they are good practice for any researcher:

“Who defined the research problem?

For whom is this study worthy and relevant?  Who says so?

What knowledge will the community gain from this study?

What knowledge will the researcher gain from this study?

What are some likely positive outcomes from this study?

What are some possible negative outcomes?

How can the negative outcomes be eliminated?

To whom is the researcher accountable?”

(2012, pp. 175-76)

I also had my students read Marisa Elena Duarte and Miranda Belarde-Lewis on library cataloging practices and Indigenous knowledge, and they have very similar things to say about the importance, for Indigenous scholars, of building and maintaining relationships with each other, and with the people they are hoping to learn from.

I interviewed several of my anthropologist colleagues as guest lecturers (in soundfile/podcast form) for this class, so that my students could hear the voices of anthropologists who are not me–it’s a practice I’ll continue even if I ever teach in physical classrooms again. More than one of them made the point that to be able to do the work that you find important and interesting, you need to start with what is important to the people you hope to learn from. You might end up realizing that the project you wanted to do isn’t the one you should do. You might find that you are stuck organizing the broom closet for a month before anyone will have an unguarded conversation with you. You do this work to build relationships, because your goal should not be extractive, for people to give you information, but for your work to have meaning to them, and for you to be humans to each other, not just potential transactions.

Smith, Duarte, and Belarde-Lewis all write about knowledges, the importance of knowing whose knowledge, and in relation to whose other knowledges. Indigenous knowledge and its production have historically been erased or bounded within Western interpretations of that knowledge–those processes are social, and require social analysis (and a power analysis), not just fact checking.

In their respective works, Smith, and Duarte and Belarde-Lewis, highlight the importance of relationships, of trust, of creating places where Indigenous people can connect with each other, with their own priorities, and produce knowledge by and for and of themselves, not just in relation to the knowledges and structures imposed on them by colonization and its controlling processes. There is a lot that non-Indigenous people need to learn from Indigenous people and traditions, there’s a lot of listening we need to do, but today I want to point to this as one thing we need to pay much more attention to when we worry about things like “information literacy.”

Sam Popowich notes that in LIS, the opposite of knowledge is defined as error, which then might theoretically be “fixed” with more information.

“Library leadership view the opposite of knowledge to be error (correctable by more knowledge), and so ideology—knowledge in the service of power—is automatically excluded. By excluding the concept of ideology from any consideration of intellectual freedom, people can be wrong but they can never be collectively implicated in structures of false knowledge. The result is that intellectual freedom remains understood solely as an individual concern, and the role of libraries at most to correct error, but never to engage in the relationships between knowledge, false ideas, and power.”

Knowledges are created from many places, and generated in the context of information being produced by and passed on by people.  And our reactions to information, and navigation of various kinds of knowledge, are informed by our relationships to the people we associate with the information.  So in our current situation where misinformation is rampant and putting public health at risk, we need to sit with the likelihood that more information is not going to fix things.    

Several years ago I participated in the Visitors and Residents project, researching student information-seeking behavior. We interviewed first-year students, who often cited their parents, their friends, and their roommates as people they talked to while doing their class papers. It wasn’t until their second or third year that they started citing professors, and occasionally library workers. Why?

Because those people were no longer strangers to them.

My current research during the pandemic involves interviewing students as a part of a Jisc project in the UK, and what they are telling me they miss in the pandemic is interactivity. They say they want to be on campus in lecture halls, so they can talk to their lecturers before and after class, so that their professors can see their faces and maybe tell when they are confused and pause, or explain, or repeat themselves. They want to be able to meet with classmates in the library, or in cafes, to talk, and connect, and “have fun” as a part of their going to university. They talk about how hard it is to feel engaged online if all there is is content delivery/recorded lectures or uploaded articles in the course management system.

And they clearly assume that the interactivity would be happening more in physical spaces, because they have experienced how hard it is for interactivity to be programmed into university experiences that still prioritize content delivery in digital contexts.  This is not to say such interactivity is impossible (think about online gaming, messageboards, dating sites–online interactivity is everywhere!)–just that universities are clearly experiencing barriers to providing it.  One of those might be their failure to fund full-time expertise in online environments.  

It’s possible to do this work, of building connections online–one example of people with expertise trying to help can be found in the work of  Mia Zamora, Maha Bali, and Autumm Caines at Equity Unbound.  

Attendance alone has never been evidence that your students were learning.  It was all the other things that happen in classrooms, and out of classrooms.  It was always stuff you couldn’t see.  We have never been able to bear witness to all of the processes that contribute to students’ learning.  So, why should we try now?

We are coping, poorly, with what is out of our control (the pandemic, our labor situation, our students’ attention) by telling ourselves that we could control and capture some of what is going on, via surveillance and analytics.

But:  control is not care.

Control is not teaching.

And active learning and teaching practice shows us that it is in the letting go of our control that we can effectively curate environments for learning that are generative, just, and caring.

So we need to not mistake student engagement with systems (like presence in the LMS, or library catalogs) for student engagement with processes. And we need to think about what we could be offering students instead of what Jeffrey Moro has called “cop shit”:

“Like any product, cop shit claims to solve a problem. We might express that problem like this: the work of managing a classroom, at all its levels, is increasingly complex and fraught, full of poorly defined standards, distractions to our students’ attentions, and new opportunities for grift. Cop shit, so cop shit argues, solves these problems by bringing order to the classroom. Cop shit defines parameters. Cop shit ensures compliance. Cop shit gives students and teachers alike instant feedback in the form of legible metrics.”

You need to be human to students and colleagues, and they need to be human to you. That means no dehumanizing practices in already challenging spaces. No proctoring, no AI, no predictive analytics, no “engagement metrics.” These numbers and metrics give the illusion of knowledge.

The work of our classrooms, and our libraries, digital and otherwise, needs to be at least as much around relationship building as it is around information wrangling.  And in building those relationships we can move towards collaborative models of scholarship and teaching, where no one person is the Star of the Show, but where we as a team can provide the kind of environments our students need, and that we need too, for critical and effective scholarly practices.

This is not a “silver lining” but work we have always needed to do. The responsibility for us as instructors and educators is to have and gather information about these systems on behalf of our students, so that we might refuse on their behalf. We cannot expect students, from their position, to do all of the work of protecting themselves from unnecessary quantification and surveillance. Where we have power, we need to use it for them. And for our precarious and adjunct colleagues who do not have access to the power to refuse.

I’d point to this example of refusal from Dearborn, MI as inspiration for what is possible.  

Think about: are those of you who work in the library embedded in relationships across campus that make you part of the trusted network of students and faculty?

Which people were used to and comfortable with the things we needed to change, and want to “go back” to that place of comfort? Whose comfort is determinative in our choices going forward? Whose discomfort doesn’t matter? Which student voices are heard, when talking about whether we “should” be back on campus, or even what “on campus” means? Which people did we always need to listen to more?

The motivation of information literacy work cannot be “the value of libraries”–one-shots are not a measure of importance, and might in fact be the opposite. The motivations need to be the needs of your community, and especially of the most vulnerable. We need to care for our communities more than we care about the library.

An institutional agenda that is built on social justice and Black feminist ethics of care requires paying attention to the impact of misinformation on people’s lives, not both-sidesing things via debate or “neutral” free speech platforming.

Because who gets to speak is historically about who has the power to be speaking.  And we need to start reframing our attention around who should be heard.   Rodrigo Ochigame writes of liberation theology in LIS and notes:  

“The remarkable innovation of the Brazilian liberation theologians is that they moved beyond a narrow focus on free speech and toward a politics of audibility. The theologians understood that the problem is not just whether one is free to speak, but whose voices one can hear and which listeners one’s voice can reach.”

One of the points I am trying to make is that even when we get to be back in physical spaces together, we need to continue to do the work of building and maintaining relationships, and recognizing and engaging with knowledges, not just information.  And we need to listen to vulnerable people, we need to listen to the people for whom the systems in which we operate were not originally built.  We always needed to be listening to disabled people when they told us what they needed.  We historically have not, or have done the minimum to be ADA compliant.   We always needed to listen to Black women, when they told us what white supremacy was doing to students, to communities, to our entire country.  We historically have not, because misogynoir is a powerful force.  We have always needed to listen to the “firsts” at universities, and not approach them as the ones with the deficit to be remedied.  We should hear them as the ones who can tell us what universities should be doing to support students, but don’t, because universities are built for students privileged enough to be OK without that institutional support.

So many people have died since March 2020. In the US alone we have lost more than half a million people, and so many of those people should still be here; these deaths were largely preventable. And before the pandemic people were dying, especially Black people, from police violence, and medical malpractice, and the impact of racism on their health and their ability to move freely through the world. We cannot value Black people, brown people, or Indigenous people only once their story has ended, or when it contains trauma.

The information we value, and the knowledges we recognize, are generated by people we value, and also about people we value. 

This is work, even with all the chaos around us, that we always should have been doing.  Whatever else has changed, and will change, that work and the need for it will not.

So what are we going to do?

Sources:

Birhane, Abeba. (2021) “Algorithmic injustice: a relational ethics approach” Patterns, 2(2)

https://doi.org/10.1016/j.patter.2021.100205

Caines, Autumm (2021) “The Weaponization of Care”  Real Life.   May 24.

https://reallifemag.com/the-weaponization-of-care/

Caulfield, Mike (2019)  “SIFT:  The Four Moves”  (Blog post).  June 19.

Critical Disinformation Studies (syllabus) 

https://citap.unc.edu/research/critical-disinfo/

Duarte, Marisa Elena & Miranda Belarde-Lewis (2015) Imagining: Creating Spaces for Indigenous Ontologies, Cataloging & Classification Quarterly, 53:5-6, 677-702 

http://dx.doi.org/10.1080/01639374.2015.1018396

Ellenwood, David (2020) “Information has Value: The Political Economy of Information Capitalism” In the Library with the Lead Pipe, Aug 19.

http://www.inthelibrarywiththeleadpipe.org/2020/information-has-value-the-political-economy-of-information-capitalism/

Hao, Karen (2021) “How Facebook got addicted to spreading misinformation” MIT Technology Review, March 11.

https://www.technologyreview.com/2021/03/11/1020600/facebook-responsible-ai-misinformation/

Kreiss, Daniel and Shannon McGregor “Polarization Isn’t America’s Biggest Problem—or Facebook’s” Wired, April 5, 2021 https://www.wired.com/story/polarization-isnt-americas-biggest-problem-or-facebooks/

Moro, Jeffrey (2020) “Against Cop Shit”  (blog post) 13 February.  https://jeffreymoro.com/blog/2020-02-13-against-cop-shit/ 

Ochigame, Rodrigo (2020)  “Informatics of the Oppressed”  Logic, Issue 11 (Care), August. 

https://logicmag.io/care/informatics-of-the-oppressed/

Pagowsky, N. (2021). The Contested One-Shot: Deconstructing Power Structures to Imagine New Futures. College & Research Libraries, 82(3), 300. doi:https://doi.org/10.5860/crl.82.3.300 

https://crl.acrl.org/index.php/crl/article/view/24912

Popowich, Sam (2021) “Canadian Librarianship and the Politics of Recognition” Partnership 16(1)

https://journal.lib.uoguelph.ca/index.php/perj/article/view/6126/6036

Silverman, Sarah, Autumm Caines, Christopher Casey, Belen Garcia de Hurtado, Jessica Riviere, Alfonso Sintjago, and Carla Vecchiola (2021) “What Happens When You Close the Door on Remote Proctoring? Moving Toward Authentic Assessments with a People-Centered Approach” To Improve the Academy, Volume 39, Issue 3: Educational Development in the Time of Crises, Spring.

https://doi.org/10.3998/tia.17063888.0039.308

Scheeres, Julia (2014) “Escape from Jonestown”  Longreads.  November 12.  

Seeber, Kevin (2018) “Teaching CRAAP to Robots:  Artificial Intelligence, False Binaries, and Implications for Information Literacy”  Critical Librarianship and Pedagogy Symposium, University of Arizona, November. 

https://kevinseeber.com/claps2018.pdf

Seeber, Kevin (2017) “Wiretaps and CRAAP” (Blog post) March 18.

https://kevinseeber.com/blog/wiretaps-and-craap/

Smith, Linda Tuhiwai   (2012)  Decolonizing Methodologies, second edition.  Zed Books:  London and New York.

Wade, Carrie (2020) “No Answers, Only Questions: The false fight against fake news”  (Blog post 576)  Sept 22.

https://seadoubleyew.com/576/no-answers-only-questions-the-false-fight-against-fake-news/

Wade, Carrie (2018) “I am not your Fake News Savior” (Blog post 150) March 8.

https://seadoubleyew.com/150/i-am-not-your-fake-news-savior/


Gaining insight over fixing problems: how open-ended research can teach us what we need to know

November sunset in Guelph

I was so pleased to be invited to the University of Guelph library by Karen Nicholson and Ali Versluis to give a talk and also to talk with people in the library about user experience and ethnographic research in library and education contexts. This was the last talk that I gave during my November Tour, and I think it came together the most solidly of the four (there’s something to be said for the repetition of experiences in getting things right, note to self). I would also like to thank Chris Gilliard for reading early drafts of this, and helping me clarify some of my argument. Thanks to Jason Davies for the Mary Douglas citation. And credit as well to Andrew Asher, who was my research partner in some of the work I talk about here.

I wrote this talk at my home, in what is now called North Carolina, in the settler-occupied land of the Catawba and Cherokee people.   I am a Cajun woman, and my people are a settler people from the Bayou Teche, on Chitimacha land in what is now called Louisiana.  

I want to  acknowledge here the Attawandaron people on whose traditional territory the University of Guelph stands and offer my respect to the neighboring Anishinaabe, Haudenosaunee and Métis.  

************************************

A few years ago, Andrew Asher and I were hired to do a project for an international non-profit that provides electronic resources to libraries in less well-resourced countries. The organization was aware that there were low-use and high-use institutions among those they were providing resources for, and wanted to know why that difference was there.

So we interviewed people in Zambia, and in Kyrgyzstan, in places that this organization told us didn’t have connectivity issues.  While there might not have been connectivity issues on the university campuses, the practical experience of connectivity was not consistent, as people were not always on campus.  As researchers, we encountered this as a problem early on, for example not being able to use Skype for interviews because of connectivity problems. We ended up doing a mix of Skype to call mobile phones, and WhatsApp to conduct interviews in locations where the internet was not reliable for our participants. 

Among the things we found out in the course of our research was that in Zambia, people who wanted faster internet bought ISP “rabbits” to gain access off campus. We interviewed a PhD candidate in Engineering who made the point that unless you were on the university network (Eduroam), you could not use university materials (such as library resources). Therefore, using the faster, more reliable (but more expensive) rabbit modems in Zambia locked students and staff out of their institutional resources.

We interviewed a Lecturer in Education with similar issues, even though he was at a “high-use” institution.  It wasn’t that the subscriptions weren’t there, or the resources not theoretically available, but that connectivity made those resources less useful, as they were difficult to get to:

“Yes, like I was telling you, either you subscribe to some journal publisher and because of poor connectivity, you may not get access to those services.  So it’s basically attributed to poor connectivity. Not that the institution does not have the information, the information could be there but the connectivity limits us from getting access.  Cause the system gets to be slow.”

This scholar did point out that this doesn’t happen too frequently, so he wasn’t going to complain too much about access. But he highlighted what’s at stake when those failures happen: he can’t do his work.

“Basically, I can just say that is it poor connectivity and when there’s poor connectivity and there’s something that I urgently need to confirm because like when I’m reading a journal article where somebody has cited somebody.  There are times when I actually need to read the other article or if it’s a book which they refer to so I’ll probably have to go online to download and if there is not connectivity then that becomes a problem.”

Our research revealed that use of resources (or lack thereof) wasn’t just about connectivity, it was also about culture, and the separation that scholars experienced from the people working in the library.  One librarian we spoke to made it clear that the levels of authentication that scholars found burdensome were there on purpose to make sure that only the right people could have access to them. That, however, translated to even the “right people” using those resources less, or not at all, preferring to spend their precious internet time on getting to resources that were more easily accessible, even if not institutionally provided.

In Kyrgyzstan, one scholar assumed that because the physical collection in the library was out of date and inadequate, the electronic resources would be, too.

So, scholars in these two countries, in both “high” and “low” use institutions according to the non-profit, acquired and shared resources via printing, email, and thumb drives more often (and more reliably) than by going online to the resources paid for and provided by the organization.

The implications we drew out were as follows:

  • Providing materials “online” is not the same as providing “access” when the internet is not a sure thing.  Also, having a connection is not the same thing as being connected enough to make using online resources a feasible option. There are many barriers to accessing library materials that are outside of the library’s own systems and infrastructure.  
  • Scholars find what they need, and what is accessible–if they Google something and it’s closed-access, they move on until they find something they can use. The existence of the materials does not necessarily translate into their use.
  • The disconnect of the library from the research workflow of the scholars interviewed here was striking, especially in the context of their awareness of the need for training, and for knowledge about how to better navigate useful resources. For example, one Lecturer in Education was at her current institution for 4 years before she knew about electronic resources, and even then it wasn’t until she had started her PhD studies at another institution.

And our recommendations were things like:  pay attention to physical infrastructure when you offer online resources to institutions.  Consider offering resources in digital forms that aren’t just online. Think about facilitating more networking and connections between the people in the library and their surrounding community of scholars.  Basically, we told them context matters, and that the non-profit, in providing online resources, was operating as if they were in a vacuum.

Our report had to do with infrastructure, economics, and the lives of the scholars (faculty and students). The non-profit wanted a problem to fix, and in many ways that was reasonable–it cost money for them to provide these resources, and they wanted to avoid wasting those resources. What we as researchers presented them with was an exploration of the contexts in which the people they were trying to help (via libraries) were restricted in what was or wasn’t possible.

We did not provide them with a quick-fix solution.  In many ways, the questions they wanted to ask were inevitably going to have disappointing answers.  

And well, the qualitative work we did wasn’t satisfying, short-term, but I think it’s important nonetheless.

Why was our research unsatisfying? Well, to some extent, the reason is the culture of libraries.  

I will point again to the article “Ethnographish” that Andrew and I wrote. We wrote it in a moment, several years into our collective work as anthropologists working in libraries, where we wanted to think critically about why the work we were doing looked the way it did. And also why particular kinds of work (especially open-ended exploratory ethnography) were so hard for us to do.

Our argument is:  open-ended exploratory research is a hard sell in libraries.  We see UX research not just because it’s useful, but because it’s finite, and in particular because it’s proposing to solve specific problems.

“Libraries are notoriously risk averse. This default conservative approach is made worse by anxiety and defensiveness around the role of libraries and pressures to demonstrate value. Within this larger context, where the value of libraries is already under question, open-ended, exploratory ethnographic work can feel risky.” (Lanclos and Asher 2016)

I think that in positioning themselves as problem-solvers, libraries and library workers are positioning themselves in a tactical way. De Certeau’s distinction between kinds of agency (tactics vs. strategy) is useful here, helping us think about the kinds of actors who are allowed choices given their structural position. To what extent do libraries and library workers get to make decisions that aren’t just tactical, not just reactions to situations? How and when do libraries and library workers get to make strategic decisions? Because that has to be more than just responding to demands and solving problems.

A while ago I gave a talk at a CUNY event that advocated for the mixed-methods library. Lots of assessment departments talk about (and some do) both qualitative and quantitative work (though I still stand by my impression that a lot of the qualitative stuff is UX-style “what is the problem” approaches). I gave that talk in 2014, and at the time, part of what I was pointing to was the need to get insights that numbers would not give us.

For example, I worked with a university that participated in the Measuring Information Service Outcomes survey.  Some of the bar charts we can generate from this data look like this:

 We have all of these numbers, what do they mean?  What does “satisfied with the library” mean, anyway?  Can graphs like these tell us anything?  

In that talk in 2014 I actually said “I don’t want to get rid of quantitative measures in libraries,” but now in 2019 (and actually, way earlier than that) I decided it wasn’t my job to advocate for quantitative anything, and not just because lots of other people are already advocating for that.

Because now in 2019, quantification and problem-fixing orientations have landed us with learning analytics, and library analytics, and I think there’s a lot more at stake than “these bar charts don’t tell us enough” (which was bad enough). We have arrived here in part because somewhere along the way arguments accompanied by numbers were interpreted as Most Persuasive (I think we get to thank economists, as a discipline, for this, given their infiltration into popular news media as commentators).

Being able to categorize people also feels like a constructive action, a first step towards knowing how to “help” people (and categories are certainly central to particular practices in librarianship, and yeah they come with their own troubled history, as anyone who’s read critical work on LOC or Dewey systems will attest).  

So let’s think about the impact of categorizing and quantifying academic work, including the work of libraries.  Let’s think about what we are doing when we put people into categories, and then make decisions about capability based on that.  And yeah. Pop culture quizzes, and even sometimes those management personality tests can be fun.

Where it all ceases to be fun is when decisions get made on your behalf based on the results.

Frameworks and quizzes and diagnostics (what I like to call the “Cosmo Quiz” school of professional development) are often deployed with the result that people decide what “type” they are to explain why they are doing things. Pointing to individual “types” and motivations provides an easy end-run around the organizational, structural, and cultural circumstances that might also be the reasons for practice. Because then when there are problems, it is up to the individual to “fix it.”

What are we doing when we encourage people to diagnose themselves, categorize themselves with these tools? The underlying message is that they are a problem needing to be fixed (fixes to be determined after the results of the questionnaire are in).

The message is that who they are determines how capable they are. The message is that there might be limits on their capabilities, based on who they are.

The message is that we need to spend labor determining who people are before we offer them help. Such messages work to limit and contain people, rather than making it easy for people to access the resources they need, and allowing them to define themselves, for their identity to emerge from their practice, from their own definitions of self.

When we as UX workers use personas (another way of categorizing people) to frame our testing of websites, we have capitulated to a system that is already disassociated from people, and all their human complexity. The insidious effect of persona-based arguments is to further limit what we think people, as particular categories, are likely to do. Are first-year students going to do research? Do undergraduates need to know about interlibrary lending? Do members of academic staff need to know how to contact a librarian? Why or why not? If we had task-based organizing structures in our websites, it wouldn’t matter who was using them. It would matter far more what they are trying to do.

I am informed in this part of my argument by the anthropologist Mary Douglas in How Institutions Think, in particular her argument that institutions are socially and culturally constructed, and that they themselves structure knowledge and identity. Douglas’ work allows us to think of personas and other kinds of personality-test categories as “patterns of authority”: not just ways of trying to make things clear, but ways of reifying current structural inequalities, and categories that limit people and their potential. When institutions do the classifying, the resulting patterns are authoritative ones; the profiles that suggest plans of action come at the expense of individual agency, and imply that the institutional take on identity is the definitive one that determines future “success.”

What are the connotations of the word “profile?” If you have a “profile,” that suggests that people know who you are and are predicting your behavior. We “profile” criminals. We “profile” suspects. People are unjustly “profiled” at border crossings because of the color of their skin, their accent, their dress.

“Profiles” are the bread and butter of what Chris Gilliard has called “digital redlining”: “a set of education policies, investment decisions, and IT practices that actively create and maintain class boundaries through strictures that discriminate against specific groups.” His work is at “the intersections of algorithmic filtering, broadband access, privacy, and surveillance, and how choices made at these intersections often combine to wall off information and limit opportunities for students.”

“Now, the task is to recognize how digital redlining is integrated into technologies, and especially education technologies, to produce the same kinds of discriminatory results.” (Gilliard and Culik 2016)

In his recent EDUCAUSE talk, Chris gave some examples of what he calls “EdTech WTF moments”:

  • “Facemetrics tracks kids’ tablet use. Through the camera, patented technologies follow the kids’ eyes and determine if the child is reading, how carefully they are reading, and if they are tired. “You missed some paragraphs,” the application might suggest.
  • In a promotional video from BrainCo, Students sit at desks wearing electronic headbands that report EEG data back to a teacher’s dashboard, and that information purports to measure students’ attention levels. The video’s narrator explains: “School administrators can use big data analysis to determine when students are better able to concentrate (Gilliard 2019).”

One problem is that it’s possible to extract quantified behavioural data from systems, in a context (e.g., libraries) where quantified data is perceived as most persuasive.

What gets lost in quantification is not just the Why and How (quantification is really good with the What, and occasionally the Where), but also the privacy, safety, and dignity of the people whose data you are extracting. This is a “just because you can doesn’t mean you should” situation, especially when we consider our responsibility to people who are already over-surveilled, hypervisible, and structurally vulnerable (i.e., Black, brown, and Indigenous people).

Let’s look at this Guardian article on student surveillance, and here I’m guided again by Chris Gilliard’s deep dive on this article:

https://twitter.com/hypervisible/status/1186625050184732672

Basically, companies like Bark and Gaggle are using school worries about liability around school shootings and student suicides and bullying as a lever by which they gain access to the schools.  They sell “security” when what they are actually peddling is “surveillance.”  

In this article none of the concerned parties are talking about gun control, or human systems of care that can deal with mental health issues, address discrimination against LGBTQ+ kids, racial bias, and so on. The companies are selling results that are not borne out by the research they hand-wave towards. They are counting on people being too scared not to engage with these systems, because they feel helpless.

(sound familiar?)

Read the damn thing yourself too, it’s terrifying to me: https://www.theguardian.com/world/2019/oct/22/school-student-surveillance-bark-gaggle

And of course it gets worse–as I was writing this talk, a bill was introduced by US Republican senators to make school engagement with this tech (and these tech companies) MANDATORY.

Thanks to Chris Gilliard and his work, I am also aware of Simone Browne’s book Dark Matters: On the Surveillance of Blackness. In this book, she writes a Black feminist, critical race studies-informed take on surveillance studies. She points particularly to the history of surveillance technology as one that emerges from the white supremacist need to police Black people, Black bodies. Her examples include the trading of enslaved people in the 1800s, and the tracking and control of enslaved people via paper permits and laws about carrying lanterns after dark, and she makes it clear that this history is relevant to current discussions of how we make people visible, in what circumstances, and why. We cannot disentangle race and inequality from our discussions of these technologies, nor should we try to in a quest for “neutrality” or “objectivity.”

The surveilling gaze is institutionally white, and furthermore, as Browne demonstrates in her book, the technologies and practices of surveillance have a deep history in the colonization and enslavement of Black and Indigenous people. The history of current surveillance practices involves the production and policing of racialized categories of people, in particular Blackness and Black people, so that they can be controlled and exploited.

We need to think too about the racist context in which data is generated and collected, as in the case of health care data used to generate algorithms intended to guide health care decisions. In her perspective piece in the same issue of Science that published that research, Ruha Benjamin notes that researchers “found that because the tool was designed to predict the cost of care as a proxy for health needs, Black patients with the same risk score as White patients tend to be much sicker, because providers spend much less on their care overall.”

While surveillance and tracking are clearly forms of control, and the use of algorithms is a problem, their use is often framed as care (again, see the people interviewed and quoted in the Guardian article, and this is an argument I hear in library contexts too, “we need the data to care for students and faculty.”)

Insisting that people have to participate in systems that harvest their data to have access to education or health care is a kind of predatory inclusion.  

“Predatory inclusion refers to a process whereby members of a marginalized group are provided with access to a good, service, or opportunity from which they have historically been excluded but under conditions that jeopardize the benefits of access. Indeed, processes of predatory inclusion are often presented as providing marginalized individuals with opportunities for social and economic progress. In the long term, however, predatory inclusion reproduces inequality and insecurity for some while allowing already dominant social actors to derive significant profits (Seamster 2017).”

When people become aware that they are under surveillance, there can be a ”chilling effect” where they do not engage with the system at all.  This is refusal: not engaging with the system because of wariness about what might happen if they do.  We need to consider carefully the disparate effects some of these methods of surveillance may have on trans students, undocumented students, and other vulnerable populations.

Our role as educators, as workers within education, should be to remove barriers for our students and faculty (and ourselves), not give them more.

We also need to think critically about whether the systems we are extracting data from accurately reflect the behaviors we are interested in.  For example, borrowing histories, swipe card activity records, and attendance tracking are all proxies for behaviors, not direct observations, and not necessarily accurate representations of behaviors (even as they might seem precise, and flatter our precision bias).

And if you are worried about “How will we know…X,” please do not assume that these systems are the only way.  The vendors selling the systems that collect this problematic data want you to THINK that theirs is the best and only way to find things out.  But that is not true.

The fight against quantification, pigeonholing, surveillance, and tracking should include qualitative research engagement–like the work I do, like the work I try to write about and train people to do and encourage them to try–engagement with the people from whom we want to learn, and with whom we want to work.  I would even suggest that the lack of “scalability” of qualitative methods is a benefit, if what we want is to be able to push back against surveillance and automated systems.

It’s about more than being strategic on behalf of libraries and library workers; it’s also about creating space for students and faculty to be strategic, to exercise power and agency in a context that increasingly wants to remove that and put people at the mercy of algorithms.  This is particularly dangerous for already vulnerable people–Black and brown, Indigenous, women, LGBTQ+ people. Exploratory ethnographic approaches, engaging with people as people (not as data points), give us not just more access to the whys and hows of what they are doing, but can work to connect us with them, to build relationships, so that we don’t have to wonder for long “why are they doing that.”  Then we won’t have to listen to people who rely on machines and their broken proxies for human behavior and motivations.

Further Reading and Resources

Library Trends, Vol. 68, No. 1, 2019 (“Learning Analytics and the Academic Library: Critical Questions about Real and Possible Futures,” edited by Kyle M. L. Jones), © 2019 The Board of Trustees, University of Illinois

Benjamin, Ruha, “Assessing risk, automating racism,”  Science 25 Oct 2019: Vol. 366, Issue 6464, pp. 421-422.  DOI: 10.1126/science.aaz3873 https://science.sciencemag.org/content/366/6464/421.full

Browne, Simone. Dark matters: On the surveillance of blackness. Duke University Press, 2015.

de Certeau, Michel, and Steven Rendall. The Practice of Everyday Life. University of California Press, 2011.

Douglas, Mary. How institutions think. Syracuse University Press, 1986.

Gilliard, Chris “Digital Redlining”  featured session, EDUCAUSE conference, Chicago, October 16, 2019.  https://events.educause.edu/annual-conference/2019/agenda/digital-redlining

Gilliard, Chris and Hugh Culik “Digital Redlining, Access and Privacy”  Privacy Blog, Common Sense Education, May 24, 2016, https://www.commonsense.org/education/privacy/blog/digital-redlining-access-privacy 

Lanclos, Donna, and Andrew D. Asher. “‘Ethnographish’: The State of the Ethnography in Libraries.” Weave: Journal of Library User Experience 1.5 (2016).  https://quod.lib.umich.edu/w/weave/12535642.0001.503?view=text;rgn=main

Obermeyer, Ziad, and Sendhil Mullainathan. “Dissecting Racial Bias in an Algorithm that Guides Health Decisions for 70 Million People.” Proceedings of the Conference on Fairness, Accountability, and Transparency. ACM, 2019. https://science.sciencemag.org/content/366/6464/447 

Noble, Safiya Umoja. Algorithms of Oppression: How Search Engines Reinforce Racism. NYU Press, 2018.

Seamster, Louise, and Raphaël Charron-Chénier. “Predatory inclusion and education debt: Rethinking the racial wealth gap.” Social Currents 4.3 (2017): 199-207. https://journals.sagepub.com/doi/abs/10.1177/2329496516686620?journalCode=scua

Watters, Audrey. (2014) “Ed-tech’s Monsters”  Hack education, Sept 3, http://hackeducation.com/2014/09/03/monsters-altc2014

Listening to Refusal: Opening Keynote for #APTconf 2019

Me delivering this talk, thanks to notes printed out at the last minute by Steve Rowett (thank you, Steve!) (photo by Lawrie Phipps)

On July 1st I had the great pleasure of delivering the opening keynote address to the APT Conference.  Before I try to represent my talk here, I need to thank the conference team, and especially Jason Davies, who contacted me last year to see if I would be interested in speaking at the event.  And I was, and I did, and I was glad to be there. When I got up to give this talk, I thanked the people in the room and said “I hope I make you very uncomfortable.” I suppose the conference feedback will indicate whether or not I was successful.  (By the way, the slides and speaking notes for this talk are here.)

In April 2019, right about the same time that I was thinking about what I wanted to say at APT, a report from the UK Department for Education came out, titled “Realising the Potential of Technology in Education: a strategy for education providers and the technology industry.”

This government document sets out the vision for the use of technology in education (specifically in England, but with implications for the rest of the UK).  So I wondered at its approach, but did not have to wonder for long, as its emphasis was clear from the table of contents.

This report centers the needs and desires of the tech industry.  It trades in deficit models, starting from the assumptions that there is not enough technology in educational contexts, and that more tech is the answer to “drive change.”

Words with the root “innov” (innovate, innovation, innovating, innovative) show up 43 times in this 48-page document.  Section 6 in particular gives the game away, with quite detailed concerns about the health and well-being of the edtech business sector in England, and the need for the industry to have streamlined access to education and educators.

The word “procurement” shows up 13 times, but “pedagogy” appears nowhere in the report.
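(For the curious: counts like these are easy to check yourself. Below is a minimal sketch, assuming you have extracted the report’s text to a plain-text file, for example with pdftotext; the filename is a placeholder, and exact numbers may differ slightly depending on how the PDF text is extracted.)

```python
# Minimal sketch for tallying word counts in the DfE report, assuming its text
# has been extracted to a plain-text file (the filename here is a placeholder).
import re

with open("dfe_edtech_strategy.txt", encoding="utf-8") as f:
    text = f.read().lower()

for label, pattern in [('words with the root "innov"', r"\binnov\w*"),
                       ('"procurement"', r"\bprocurement\b"),
                       ('"pedagogy"', r"\bpedagogy\b")]:
    print(f"{label}: {len(re.findall(pattern, text))}")
```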

The DfE report came out just after Lawrie Phipps and I had presented on findings from work we carried out in 2018-19 on the teaching practices of lecturers in HE and FE.  We released this report at Jisc’s Digifest in March, the same month that our article on this same work was published in the Irish Journal of Technology Enhanced Learning.  I’ve discussed the broad outlines of this research elsewhere in the blog (and if you like you can watch our presentation on our approach and methods here)–for the purposes of this talk, I wanted to focus on the way we framed the work, and contrast it to the DfE report, because the research that Lawrie and I did seems to me the antithesis of that government document.  While that report started with technology, and assumed that there wasn’t enough of it, our assumptions were as follows:

  • People who teach have practices that involve digital.  
  • People have expertise, and make reasoned decisions around what to do and not do. 

In our approach to our project we did not start off asking about technology (even though our research questions definitely were about technology in teaching and learning contexts).  We started off asking about teaching.

Among the themes that emerged in our interviewees’ discussions about technology were the barriers and enablers to the use of that tech.  Nowhere among these barriers was “lack of access to education technology markets.” There were plenty of barriers that were human and organizational.  Time, priorities, values, relationships, and trust (or lack thereof) all informed the extent to which people did or did not engage with technology, both institutionally provided and otherwise.

It was also made clear over the course of our research that there were things being done with technology that were not particularly “innovative” (e.g., lectures, grading, depositing materials for consumption). During our analysis, when thinking about barriers to technology use and in particular to “innovation,” we found that practitioners were struggling with the disconnect between what they need to do in the spaces their institution provides and what is possible–before they ever get to what they want to do, or what they might not know about yet.

In institutional contexts where people do not have the time, organizational support, or access to resources that would allow for exploration around new tech, or using old tech in new ways, it’s not hard to see why “innovation” is hard to come by.  It is also easy to see that “more tech,” or “use the tech more,” or even “create a market more friendly to vendors” isn’t going to produce more innovation. Or more effective teaching and learning contexts.

We have encountered, over the course of this research and also in the other work we do in the sector, a distinct lack of compliance around certain kinds of education technology.

For example:

Lecture Capture

We witnessed and heard about a lack of participation in lecture capture, with people not wanting to do it, citing concerns about labor exploitation and picket-line crossing, and even expressing fears of the wholesale replacement of lecturers with captured content.

VLE/LMS

We spoke to and also heard about academic staff who keep a minimal presence in the learning management system (course content, syllabi, calendars), but who engage in their actual teaching practices in digital contexts outside of institutional control.

Card Swipes

For this example, I told the story (shared with her permission) of a student who studied abroad as a part of her degree.  This experience led to a full-time job before she had finished her time at university, and that job also made it financially possible for her to complete her university degree.  In her final year there was a conflict between (required) attendance in class and the times she needed to be on site at work. Her department had recently instituted card swipes to track student attendance in class.  She worked with her head of department to get permission to not always be in class, and with that permission was “swiped in” by a classmate to satisfy institutional requirements.

I have told elsewhere the story of students engaging in an elaborate ID card charade to get a non-student into the library space where they wanted to study together–in the end, four people went into the library, and the ID system recorded only the three enrolled students, not the fourth, unaffiliated person.

An inordinate managerial focus on Compliance makes it hard to see actual practices.  The examples I list above show us that if we mistake what is reflected in the VLE/LMS, in card-swipe systems, and in only those lectures that happen to be recorded for the holistic reality of teaching and learning practices, we are terribly wrong.

Our “precision bias” means that the numbers given to us via card swipes and attendance records feel far more accurate than they actually are.  Knowing the behaviors that generate these numbers means we cannot trust them as proxies for what we want them to be. Attendance numbers don’t actually tell us much about students’ engagement with their courses of study.  Course content placed in institutional online places doesn’t necessarily reflect actual teaching practices. Card swipes in libraries don’t represent who is actually in the building at any given time.

One overarching message in these stories, and in the research project overall, was that lack of trust can be corrosive.  Not being able to trust your institution with your actual practices means that you don’t share, and they don’t know, what you are doing.

I gave a brief presentation earlier this year about our research findings around non-classroom digital spaces and practices.  After talking about the ways that instructors engaged with students in non-classroom non-LMS/VLE digital places, the main question I was asked was “How can we make them use the LMS?”

Too often the institutional response is concerned with compliance, and furthermore assumes that if people are not complying, perhaps it’s because they don’t know how to do the “thing.”  So then we end up with lots of workshops and webinars about How To X. How to embed your gradebook into Canvas. How to upload captured lectures into Moodle. How to take attendance using clickers or card swipes.  

I have been reading Dr. Simone Browne’s Dark Matters: On the Surveillance of Blackness.  In this book, she writes a black feminist, critical race studies-informed take on surveillance studies.  I was familiar with surveillance (being closely observed, especially by an institutional power such as police or military, but increasingly by corporations, and by any entity with access to the stream of data we leave in our wake these days), but unfamiliar with Steve Mann’s concept of sousveillance, which he describes as a way of “enhancing the ability of people to access and collect data about their surveillance and to neutralize surveillance” (61).

So, an example of surveillance tech would be CCTV. An example of sousveillance would be using the camera in your smartphone to film the police during a protest.

Dr. Simone Browne introduced me to the idea of dark sousveillance: a way to situate the tactics employed to render oneself out of sight (Dark Matters, p. 21 in the Kindle edition).  In particular she is theorizing and describing the means by which racialized people avoid being seen, so that they cannot be victimized by the structures and practices of surveillance.  An example of such behavior would be publicizing where the cameras are, so that you can avoid them.

Central to the idea of dark sousveillance is the fact that the surveilling gaze is institutionally White, and furthermore, as Browne demonstrates in her book, that the technologies and practices of surveillance have a deep history in the colonization and enslavement of Black and indigenous people.  The history of current surveillance practices involves the production and policing of racialized categories of people, in particular blackness and black people, so that they can be controlled and exploited.  

Dark sousveillance is a refusal of the power structures of surveillance.  I am helped in making this connection by the work of Lilian G. Mengesha and Lakshmi Padmanabhan, who define refusal as “what we conceive of as disruptions to the vicious dialectic of assimilation and resistance.”

So in thinking again about surveillance, we can see that  assimilation would be having an Alexa in your house. Resistance would be hacking Alexa to observe only when you want it to.  Refusal is not having any such device in your house at all.  

The options of assimilation vs. resistance are still in reference to a given system, such as systems of gender relations, racial identity, and economic class.  Think of the revels of Mardi Gras, which serve to strengthen the message that you should observe Lent. The presence of The Fool requires that of a Monarch. There are fundamental assumptions and premises, hegemonies, that are shot through these systems.

Refusal is not participating in those systems, not accepting the authority of their underlying premises.  Refusal happens among people who don’t have access to structural power. Refusal is a rejection of framing premises.  Recognizing refusal requires attention to, and credit for, tactics such as obfuscation or deliberate misinterpretation.

“The tactics of refusal include: illegibility, opacity, and inaction” (Mengesha and Padmanabhan 2019).

In making this argument about refusal, I want to point to some examples of what I mean.

Ethnographic refusal has been defined by Dr. Audra Simpson (an anthropologist and member of the Kahnawake Mohawk–Haudenosaunee people) as “a willful distancing from state-driven forms of recognition and sociability” (2014) (cited in Mengesha and Padmanabhan, p. 3). In her discussion of doing work within her own community, she describes moments where the person she was talking to simply did not share what they knew.  Even if it was something “everyone knew”–it remained unspoken. And she, as an ethnographer and a Mohawk, joined in that refusal and did not write that information down, rejecting the assumption that anthropological knowledge requires the right to know everything.

Think of any people among whom anthropologists want to do work, or on whose land archaeologists want to dig.  They have the right to refuse. They have the right to say No. And anthropologists historically have a difficult time with that, and continue to need to work on recognizing and respecting ethnographic refusal. 

Simpson suggests that there is a great deal that is generative about refusal, and about theories of refusal–what can we learn from the limits that refusal indicates?

In 1997 I was still doing my own anthropological fieldwork in Northern Ireland, and this book by Begoña Aretxaga came out.  The blanket protests in the H-Blocks of Northern Ireland (1976-1981) were an example of refusal.  Republican and Nationalist men who were “on the blanket” were refusing their assigned (by the British state) status of criminals and asserting their status as political prisoners, protesting the removal of the Special Category Status that defined them differently from criminals by refusing and rejecting regular prison uniforms.  These protests ended only after the deaths of the hunger strikers, including Bobby Sands, in 1981, when Thatcher’s government conceded to a number of the prisoners’ demands. Aretxaga’s focus on the political tactics of Nationalist women in Northern Ireland, including those who themselves participated in blanket protests, reveals not just their refusal of the status of common criminals, but a further rejection of the idea that as women they could not be political prisoners, or active participants in Nationalist/Republican struggles at all.

Refusal is an action, not just a lack of action.  It is exercising agency, not just “non-compliance.”  So, faculty/academic staff refuse to use systems such as an LMS/VLE or lecture capture, refusing and rejecting the premise that they and their expertise can be reduced to a piece of content like a lecture, or a cache of PowerPoint slides.

These choices are not about inability, or digital skills or capability.  These choices are made because of people’s concerns about how their labor can be exploited, taken advantage of, made invisible or redundant.  They are refusing in a context of lack of trust, precarious labor, and a de-valuing of academia and academic work.

This is the point where I remind you that the Luddites were not anti-machine, and I would point particularly to Audrey Watters’ discussion of the Luddites and their frequently misrepresented agenda here.  The act of the Luddite “isn’t about rejecting technology; but it is about rejecting exploitation” (Watters 2014).  Luddites broke machines in protest against factory practices that devalued and erased their labor.

To what extent is edtech a “Captivating Technology” (to quote Dr. Ruha Benjamin in the introduction to her 2019 edited volume)–a technology of domination that embeds, fossilizes, and perpetuates racial, economic, and other inequalities in the name of technosolutionist “neutral” fixes?  Benjamin argues we need “ethical engagement with technoscience, where the zeal for making new things is tempered by an ability to listen to the sounds and stories of people and things already made” (9).

Benjamin asks, “How, then, might we craft a justice-oriented approach to technoscience? It starts with questioning breathless claims of techno-utopianism, rethinking what counts as innovation, remaining alert to the ways that race and other hierarchies of difference get embedded in the creation of new designs, and ultimately refashioning the relationship between technology and society by prioritizing justice and equity.” (11)

Education technology is still technology.  People generate systems of classification to contain and control, and we need to ask: what racialized logics are embedded in the ways we point systems at students out of concern for their “success”?  Or in the requirements that staff comply with edtech systems in the name of consistency, or quality control? Do we assume there aren’t any such logics?

Do we assume or insist that “they can trust us?”  We do that at our peril, and theirs too, especially in a larger context where vulnerable students and staff are already under surveillance, where technology is implicated and embedded in the ways that race, gender, and class are produced and reinforced.  What reasons do students have to trust, given that context? Representatives of institutions cannot simply say “trust me” and have that come to pass.

We can find examples of refusal in specifically educational contexts, too.  The recent UIC graduate student strike was a refusal to work until material conditions and labor contracts (especially pay and health care provisions) were improved, in an overwhelming context of lack of trust in institutions, and overall economic and political precarity.

An archivist at Hollins University, in Virginia, USA, refused to remove yearbook pictures showing students in blackface from the university archives.  They did not trust the motives of their institution in removing those images, and called the decision out publicly on social media.

A group of faculty members at Yale withdrew their labor from the Ethnicity, Race, and Migration program because of a historical lack of resources and other structural support, as well as insufficient institutional recognition of their labor.  Dr. Tricia Matthew, at the time, highlighted that the problem was in part one of classifying this labor as “service,” something antithetical to robust program-building at universities.  Recently Yale seems to have made assurances that new structural support will be made available to ER&M, and faculty members have “recommitted” to the program.

When we pay attention to the refusals of students and faculty, we learn more about what is at stake, and what is actually happening.  We also need to ask, if people cannot refuse, what does it mean? 

Do we want to define education as control and compliance, rather than growth?

What are the limits of refusal?  What does that tell us about power and the structures we have to navigate?  

And there are many things we should be refusing:

  •  Quantification
  •  Employability narratives
  •  Tracking and Surveillance
  •  Technocentrism
  •  “More with Less”

Those things are emerging from the wrong way to frame education, if we value it as a form of social justice (and we should).

The framing of education as a place to sell more tech, as a potential market for a home-grown edtech Silicon Valley, rather than as a common good to be opened up to as many people and practices as possible–this framing is a political act.

The narrowing of education to a credential that gets you a job is a political act.

I have mentioned the tactics of refusal–in discussions of agency, and of what people can do in their given contexts, it’s useful to remember and incorporate de Certeau’s distinction between tactic and strategy.

Tactical refusal comes from a position of no power.  People will be exerting what agency they can, and we can learn from tactical refusals, seeing them as ways of communicating as well as ways of trying to survive.

So then strategic refusal would come from a position of power, but one that acts to dismantle current structures of power on behalf of powerless people.  Those of you who have power, what refusals can you make on behalf of the people who work for you, or for your students? How can we create situations where it’s possible for more people to refuse strategically (as in a strike, as in collective action)?

I want to emphasize again the importance of power structures in definitions of  refusal–we need to recognize that those with less power are the ones who are doing the refusing, the rejecting of the structures that disempower, misrepresent, and potentially victimize them.  

As Dr. Sara Ahmed notes:

“A struggle against power is a struggle for a right to no, a right not to agree with what you are asked to do or to be.”

What does any of this have to do with Education technology?

When people refuse (for example) to use the VLE/LMS, capture their lectures, or take attendance with digital tools, very often the institutional response is 1) “they aren’t capable, we should do more training,” or 2) “we need to make them comply,” or some combination of 1 and 2.

The lens of refusal gives us option 3)  “they have reasons for saying no.”

This appeals to me, an anthropologist, as I am a big fan of my discipline’s conviction that there is an underlying logic to the behavior of people.  Even if it’s not immediately apparent to the observer.

The correct response therefore isn’t “How can we make them comply” but “Why are they refusing?  Have we done something wrong?”

And then you FIND OUT.

I gave a talk once where I cautioned libraries not to invite anthropologists into their midst if the reason they wanted to learn about people was to make them do the “right thing” in the library.  The right way to go is to invite anthropologists to help libraries think critically about their practices, and change those practices so that people’s myriad needs can be more effectively met.

Not prediction.

Not persuasion.

Recognition.

Recognize the refusal.  Recognize it as evidence that something is wrong with what you are doing, as an institution.  Possibly the wrong is outside of your institution, but erupting within it (like student homelessness.  Like lack of access to mental health care. Like lack of funding for higher and further education). Take heed of Dr. Sara Ahmed’s reminder that the person who says no, the person who registers a complaint, is far too often treated as the problem, rather than the thing they point to or refuse.

Then your actions cannot just be about pedagogy and systems, but must be about politics and policy.

We, the people in the (APT) room, are trying to enhance, improve, and change the practices we see. We use lots of change-management approaches, we use technology, and there is a tendency to see resistance and refusal as disengagement, or as evidence of incapability. But for most of the people I have worked with, interviewed, or taught with, when they get to the point of refusal it is because of none of these things.

I would point to the example of the government of New Zealand (in particular the Prime Minister) trying to define the value of their economy not around growth, but around well-being.  What if, instead of caring so much about the growth of the tech sector, or compliance with uses of technology within institutions, we cared about the well-being of our students and staff?  What would that look like?

We need to stop seeing refusal as evidence that there’s something wrong with the people doing the refusing.  We need to see refusal as evidence that there is something wrong that they are communicating about, something wrong with the systems they are being presented with, with the structures in which they are placed.  And then we need to take responsibility for changing things. Value the people who refuse, because it is from those people that you can learn, and then work to build a more effective, more powerful set of practices within your institution.

Further Reading:

WOC Faculty (2018) “A Collective Response to Racism in Academia” Medium, May 8, https://medium.com/@wocfaculty/a-collective-response-to-racism-in-academia-35dc725415c1

Ahmed, Sara, (2017) “No” feministkilljoys, June 30, https://feministkilljoys.com/2017/06/30/no/

Browne, Simone. Dark matters: On the surveillance of blackness. Duke University Press, 2015.

Department for Education (2019). Realising the potential of technology in education: A strategy for education providers and the technology industry.  https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/791931/DfE-Education_Technology_Strategy.pdf

Lanclos, D., & Phipps, L. (2019). Trust, Innovation and Risk: a contextual inquiry into teaching practices and the implications for the use of technology. Irish Journal of Technology Enhanced Learning, 4(1), 68 – 85.  https://journal.ilta.ie/index.php/telji/article/view/53

Mann, S., Nolan, J., & Wellman, B. (2003). Sousveillance: Inventing and using wearable computing devices for data collection in surveillance environments. Surveillance & society, 1(3), 331-355.

Matthew, P. A. (Ed.). (2016). Written/unwritten: Diversity and the hidden truths of tenure. UNC Press Books.

Mengesha, L., & Padmanabhan, L. (2019). Introduction to Performing Refusal/Refusing to Perform. Women & Performance: a journal of feminist theory, 1-8.

Rahman, Zara, (2019) “Can data ever know who we really are?” Deep Dives, Medium, May 15.  https://deepdives.in/can-data-ever-know-who-we-really-are-a0dbfb5a87a0

Benjamin, R. (Ed.). (2019). Captivating Technology: Race, Carceral Technoscience, and Liberatory Imagination in Everyday Life. Duke University Press.

Simpson, A. (2007). On ethnographic refusal: Indigeneity, ‘voice’ and colonial citizenship. Junctures: The Journal for Thematic Dialogue, (9).

Watters, Audrey. (2014) “Ed-tech’s Monsters”  Hack education, Sept 3, http://hackeducation.com/2014/09/03/monsters-altc2014

View of the Pride Flag flying from Senate House, with my favorite Apologetic Building on Russell Square in the foreground, also the view from post-conference drinks July 1st (photo by me)