
Gaining insight over fixing problems: how open-ended research can teach us what we need to know

November sunset in Guelph

I was so pleased to be invited to the University of Guelph library by Karen Nicholson and Ali Versluis to give a talk and also to talk with people in the library about user experience and ethnographic research in library and education contexts. This was the last talk that I gave during my November Tour, and I think it came together the most solidly of the four (there’s something to be said for the repetition of experiences in getting things right, note to self). I would also like to thank Chris Gilliard for reading early drafts of this, and helping me clarify some of my argument. Thanks to Jason Davies for the Mary Douglas citation. And credit as well to Andrew Asher, who was my research partner in some of the work I talk about here.

I wrote this talk at my home, in what is now called North Carolina, in the settler-occupied land of the Catawba and Cherokee people.   I am a Cajun woman, and my people are a settler people from the Bayou Teche, on Chitimacha land in what is now called Louisiana.  

I want to  acknowledge here the Attawandaron people on whose traditional territory the University of Guelph stands and offer my respect to the neighboring Anishinaabe, Haudenosaunee and Métis.  

************************************

A few years ago, Andrew Asher and I were hired to do a project for an international non-profit that provides electronic resources to libraries in less well-resourced countries.  The organization was aware that there were low-use and high-use institutions among those they provided resources to, and wanted to know why.

So we interviewed people in Zambia, and in Kyrgyzstan, in places that this organization told us didn’t have connectivity issues.  While there might not have been connectivity issues on the university campuses, the practical experience of connectivity was not consistent, as people were not always on campus.  As researchers, we encountered this as a problem early on, for example not being able to use Skype for interviews because of connectivity problems. We ended up doing a mix of Skype to call mobile phones, and WhatsApp to conduct interviews in locations where the internet was not reliable for our participants. 

Among the things we found out in the course of our research was that in Zambia, people who wanted faster internet bought ISP “rabbits” to gain access off campus. We interviewed a PhD candidate in Engineering who made the point that unless you were on the university network (Eduroam), you could not use university materials (such as library resources).  Using the faster, more reliable (but more expensive) rabbit modems therefore locked students and staff out of their institutional resources.

We interviewed a Lecturer in Education with similar issues, even though he was at a “high-use” institution.  It wasn’t that the subscriptions weren’t there, or the resources not theoretically available, but that connectivity made those resources less useful, as they were difficult to get to:

“Yes, like I was telling you, either you subscribe to some journal publisher and because of poor connectivity, you may not get access to those services.  So it’s basically attributed to poor connectivity. Not that the institution does not have the information, the information could be there but the connectivity limits us from getting access.  Cause the system gets to be slow.”

This scholar did point out that this doesn’t happen too frequently, so he wasn’t going to complain too much about access.  But he highlighted what’s at stake when those failures happen: he can’t do his work.

“Basically, I can just say that is it poor connectivity and when there’s poor connectivity and there’s something that I urgently need to confirm because like when I’m reading a journal article where somebody has cited somebody.  There are times when I actually need to read the other article or if it’s a book which they refer to so I’ll probably have to go online to download and if there is not connectivity then that becomes a problem.”

Our research revealed that use of resources (or lack thereof) wasn’t just about connectivity, it was also about culture, and the separation that scholars experienced from the people working in the library.  One librarian we spoke to made it clear that the levels of authentication that scholars found burdensome were there on purpose to make sure that only the right people could have access to them. That, however, translated to even the “right people” using those resources less, or not at all, preferring to spend their precious internet time on getting to resources that were more easily accessible, even if not institutionally provided.

In Kyrgyzstan, one scholar assumed that because the physical collection in the library was out of date and inadequate, the electronic resources would be, too.

So, scholars in these two countries, in both “high” and “low” use institutions according to the non-profit, acquired and shared resources via printing, email, and thumb drives more often (and more reliably) than they did via the online resources paid for and provided by the organization.

The implications we drew out were as follows:

  • Providing materials “online” is not the same as providing “access” when the internet is not a sure thing.  Also, having a connection is not the same thing as being connected enough to make using online resources a feasible option. There are many barriers to accessing library materials that are outside of the library’s own systems and infrastructure.  
  • Scholars find what they need, and what is accessible–if they Google something and it’s closed-access, they move on until they find something they can use.  The existence of the materials does not necessarily translate into their use.
  • The disconnect of the library from the research workflow of the scholars interviewed here was striking, especially in the context of their awareness of the need for training and knowledge about how to better navigate useful resources.  For example, one Lecturer in Education had been at her current institution for four years before she knew about electronic resources, and even then only because she had started her PhD studies at another institution.

And our recommendations were things like:  pay attention to physical infrastructure when you offer online resources to institutions.  Consider offering resources in digital forms that aren’t just online. Think about facilitating more networking and connections between the people in the library and their surrounding community of scholars.  Basically, we told them context matters, and that the non-profit, in providing online resources, was operating as if they were in a vacuum.

Our report had to do with infrastructure, economics, and the lives of the scholars (faculty and students).  The non-profit wanted a problem to fix, and in many ways that was reasonable–it cost them money to provide these resources, and they wanted to avoid waste.  What we as researchers presented them with was an exploration of the contexts that restricted what was or wasn’t possible for the people they were trying to help (via libraries).

We did not provide them with a quick-fix solution.  In many ways, the questions they wanted to ask were inevitably going to have disappointing answers.  

And well, the qualitative work we did wasn’t satisfying, short-term, but I think it’s important nonetheless.

Why was our research unsatisfying? Well, to some extent, the reason is the culture of libraries.  

I will point again to the article “Ethnographish” that Andrew and I wrote.   We wrote it in a moment, several years into our collective work as anthropologists working in libraries, when we wanted to think critically about why the work we were doing looked the way it did, and why particular kinds of work (especially open-ended exploratory ethnography) were so hard for us to do.

Our argument is:  open-ended exploratory research is a hard sell in libraries.  We see UX research not just because it’s useful, but because it’s finite, and in particular because it’s proposing to solve specific problems.

“Libraries are notoriously risk averse. This default conservative approach is made worse by anxiety and defensiveness around the role of libraries and pressures to demonstrate value. Within this larger context, where the value of libraries is already under question, open-ended, exploratory ethnographic work can feel risky.” (Lanclos and Asher 2016)

I think that in positioning themselves as problem-solvers, libraries and library workers are positioning themselves in a tactical way.  De Certeau’s distinction between kinds of agency (tactics vs. strategy) is useful here, helping us think about the kinds of actors who are allowed choices given their structural position.  To what extent do libraries and library workers get to make decisions that aren’t just tactical, not just reactions to situations? How and when do libraries and library workers get to make strategic decisions?  Because that has to be more than just responding to demands and solving problems.

A while ago I gave a talk at a CUNY event that advocated for the mixed-methods library.  Lots of assessment departments talk about (and some actually do) both qualitative and quantitative work (though I still stand by my impression that a lot of the qualitative stuff is UX-style “what is the problem” approaches).  I gave that talk in 2014, and at the time, part of what I was pointing to was the need to get insights that numbers would not give us.

For example, I worked with a university that participated in the Measuring Information Service Outcomes survey.  Some of the bar charts we can generate from this data look like this:

We have all of these numbers–what do they mean?  What does “satisfied with the library” mean, anyway?  Can graphs like these tell us anything?

In that 2014 talk I actually said “I don’t want to get rid of quantitative measures in libraries,” but now in 2019 (and actually, well before that) I have decided it isn’t my job to advocate for quantitative anything, and not just because lots of other people are already advocating for that.

Because now in 2019, quantification and problem-fixing orientations have landed us with learning analytics, and library analytics, and I think there’s a lot more at stake than “these bar charts don’t tell us enough” (which was bad enough).  We have arrived here in part because somewhere along the way arguments accompanied by numbers were interpreted as Most Persuasive (I think we get to thank economists, as a discipline, for this, given their infiltration into popular news media as commentators).

Being able to categorize people also feels like a constructive action, a first step towards knowing how to “help” people (and categories are certainly central to particular practices in librarianship, and yeah they come with their own troubled history, as anyone who’s read critical work on LOC or Dewey systems will attest).  

So let’s think about the impact of categorizing and quantifying academic work, including the work of libraries.  Let’s think about what we are doing when we put people into categories, and then make decisions about capability based on that.  And yeah. Pop culture quizzes, and even sometimes those management personality tests can be fun.

Where it all ceases to be fun is when decisions get made on your behalf based on the results.

Frameworks and quizzes and diagnostics (what I like to call the “Cosmo Quiz” school of professional development) are often deployed with the result that people decide what “type” they are to explain why they are doing things.  Pointing to individual “types” and motivations provides an easy end-run around the organizational, structural, and cultural circumstances that might also be the reasons for practice. Because then when there are problems, it is up to the individual to “fix it.”

What are we doing when we encourage people to diagnose themselves, to categorize themselves with these tools?  The underlying message is that they are a problem needing to be fixed (fixes to be determined after the results of the questionnaire are in).

The message is that who they are determines how capable they are.  The message is that there might be limits on their capabilities, based on who they are.

The message is that we need to spend labor determining who people are before we offer them help.  Such messages work to limit and contain people, rather than making it easy for people to access the resources they need and to define themselves–for their identity to emerge from their practice, from their own definitions of self.

When UX workers use personas (another way of categorizing people) to frame our testing of websites, we have capitulated to a system that is already disassociated from people, and all their human complexity.  The insidious effect of persona-based arguments is to further limit what we think people are likely to do as particular categories. Are first year students going to do research? Do undergraduates need to know about interlibrary lending?  Do members of academic staff need to know how to contact a librarian?  Why or why not? If we had task-based organizing structures in our websites, it wouldn’t matter who was using them.  It would matter far more what they were trying to do.

I am informed in this part of my argument by anthropologist Mary Douglas’s How Institutions Think, and in particular her point that institutions are socially and culturally constructed, and that they themselves structure knowledge and identity.  Douglas’s work allows us to think of personas and other kinds of personality-test categories as “patterns of authority”: not just ways of trying to make things clear, but ways of reifying current structural inequalities, categories that limit people and their potential. When institutions do the classifying, the resulting patterns are authoritative ones; the profiles that suggest plans of action come at the expense of individual agency, and imply that the institutional take on identity is the definitive one that determines future “success.”

What are the connotations of the word “profile?”  If you have a “profile” that is something that suggests that people know who you are and are predicting your behavior.  We “profile” criminals. We “profile” suspects. People are unjustly “profiled” at border crossings because of the color of their skin, their accent, their dress. 

“Profiles” are the bread and butter of what Chris Gilliard has called “digital redlining”: “a set of education policies, investment decisions, and IT practices that actively create and maintain class boundaries through strictures that discriminate against specific groups.” His work is at “the intersections of algorithmic filtering, broadband access, privacy, and surveillance, and how choices made at these intersections often combine to wall off information and limit opportunities for students.”

“Now, the task is to recognize how digital redlining is integrated into technologies, and especially education technologies, to produce the same kinds of discriminatory results.” (Gilliard and Culik 2016)

In his recent EDUCAUSE talk, Chris gave some examples of what he calls “EdTech WTF moments”:

  • “Facemetrics tracks kids’ tablet use. Through the camera, patented technologies follow the kids’ eyes and determine if the child is reading, how carefully they are reading, and if they are tired. ‘You missed some paragraphs,’ the application might suggest.”
  • “In a promotional video from BrainCo, students sit at desks wearing electronic headbands that report EEG data back to a teacher’s dashboard, and that information purports to measure students’ attention levels. The video’s narrator explains: ‘School administrators can use big data analysis to determine when students are better able to concentrate.’” (Gilliard 2019)

One problem is that it’s possible to extract quantified behavioral data from systems, in a context (e.g., libraries) where quantified data is perceived as most persuasive.

What gets lost in quantification is not just the Why and How (quantification is really good with the What, and occasionally the Where), but also the privacy, safety, and dignity of the people whose data you are extracting.  This is a “just because you can doesn’t mean you should” situation, especially when we consider our responsibility to people who are already over-surveilled, hypervisible, and structurally vulnerable (i.e., Black, brown, and Indigenous people).

Let’s look at this Guardian article on student surveillance; here I’m guided again by Chris Gilliard’s deep dive on the piece.

Basically, companies like Bark and Gaggle are using school worries about liability around school shootings and student suicides and bullying as a lever by which they gain access to the schools.  They sell “security” when what they are actually peddling is “surveillance.”  

In this article, none of the concerned parties are talking about gun control, or about human systems of care that can deal with mental health issues, discrimination against LGBTQ+ kids, racial bias, and so on.  The companies are selling results that are not borne out by the research they hand-wave towards. They are counting on people being too scared not to engage with these systems, because they feel helpless.

(sound familiar?)

Read the damn thing yourself too, it’s terrifying to me: https://www.theguardian.com/world/2019/oct/22/school-student-surveillance-bark-gaggle

And of course it gets worse–as I was writing this talk, a bill was introduced by US Republican senators to make school engagement with this tech (and these tech companies) MANDATORY.

Thanks to Chris Gilliard and his work, I am also aware of Simone Browne’s book Dark Matters: On the Surveillance of Blackness, a black feminist, critical race studies-informed take on surveillance studies.  She points particularly to the history of surveillance technology as one that emerges from the white supremacist need to police black people and black bodies. Her examples include the trading practices of enslavement in the 1800s and the tracking and control of enslaved people via paper permits and laws about carrying lanterns after dark, and she makes it clear that this history is relevant to current discussions of how we make people visible, in what circumstances, and why.  We cannot disentangle race and inequality from our discussions of these technologies, nor should we try to in a quest for “neutrality” or “objectivity.”

The surveilling gaze is institutionally white, and as Browne demonstrates in her book, the technologies and practices of surveillance have a deep history in the colonization and enslavement of black and indigenous people.  The history of current surveillance practices involves the production and policing of racialized categories of people–in particular blackness and black people–so that they can be controlled and exploited.

We need to think too about the racist context in which data is generated and collected, as in the case of health care data used to generate algorithms intended to guide health care decisions.   In Ruha Benjamin’s perspective piece in that same issue of Science, she notes that researchers “found that because the tool was designed to predict the cost of care as a proxy for health needs, Black patients with the same risk score as White patients tend to be much sicker, because providers spend much less on their care overall.”

While surveillance and tracking are clearly forms of control, and the use of algorithms is a problem, their use is often framed as care (again, see the people interviewed and quoted in the Guardian article, and this is an argument I hear in library contexts too, “we need the data to care for students and faculty.”)

Insisting that people have to participate in systems that harvest their data to have access to education or health care is a kind of predatory inclusion.  

“Predatory inclusion refers to a process whereby members of a marginalized group are provided with access to a good, service, or opportunity from which they have historically been excluded but under conditions that jeopardize the benefits of access. Indeed, processes of predatory inclusion are often presented as providing marginalized individuals with opportunities for social and economic progress. In the long term, however, predatory inclusion reproduces inequality and insecurity for some while allowing already dominant social actors to derive significant profits (Seamster 2017).”

When people become aware that they are under surveillance, there can be a “chilling effect” where they do not engage with the system at all.  This is refusal: not engaging with the system because of wariness about what might happen if they do.  We need to consider carefully the disparate effect some of these methods of surveillance may have on trans students, undocumented students, and other vulnerable populations.

Our role as educators, as workers within education, should be to remove barriers for our students and faculty (and ourselves), not give them more.

We also need to think critically about whether the systems we are extracting data from accurately reflect the behaviors we are interested in.  For example, borrowing histories, swipe card activity records, and attendance tracking are all proxies for behaviors, not direct observations, and not necessarily accurate representations of behaviors (even as they might seem precise, and make us feel good about our precision biases).

And if you are worried about “How will we know…X” please do not assume that these systems are the only way.  Because the vendors selling these systems that collect this problematic data want you to THINK that it’s the best and only way to find things out.  But that is not true.  

The fight against quantification, pigeonholing, surveillance, and tracking should include qualitative research engagement–like the work that I do, the work I try to write about, train people to do, and encourage them to try–engagement with the people from whom we want to learn, and with whom we want to work.  I would even suggest that the lack of “scalability” of qualitative methods is a benefit, if what we want is to be able to push back against surveillance and automated systems.

It’s about more than being able to be strategic on behalf of libraries and library workers; it’s also about being able to create space for students and faculty to be strategic, to exercise power and agency in a context that increasingly wants to remove that, and put people at the mercy of algorithms.  This is particularly dangerous for already vulnerable people–Black and brown, Indigenous, women, LGBTQ+ people. Exploratory ethnographic approaches, engaging with people as people (not as data points), give us not just more access to the whys and hows of what they are doing, but can work to connect us with them, to build relationships, so that we don’t have to wonder for long “why are they doing that.”  Then we won’t have to listen to people who rely on machines and their broken proxies for human behavior and motivations.

Further Reading and Resources

LIBRARY TRENDS, Vol. 68, No. 1, 2019 (“Learning Analytics and the Academic Library: Critical Questions about Real and Possible Futures,” edited by Kyle M. L. Jones), © 2019 The Board of Trustees, University of Illinois

Benjamin, Ruha, “Assessing risk, automating racism,”  Science 25 Oct 2019: Vol. 366, Issue 6464, pp. 421-422.  DOI: 10.1126/science.aaz3873 https://science.sciencemag.org/content/366/6464/421.full

Browne, Simone. Dark Matters: On the Surveillance of Blackness. Duke University Press, 2015.

de Certeau, Michel. The Practice of Everyday Life. Translated by Steven Rendall. University of California Press, 2011.

Douglas, Mary. How Institutions Think. Syracuse University Press, 1986.

Gilliard, Chris “Digital Redlining”  featured session, EDUCAUSE conference, Chicago, October 16, 2019.  https://events.educause.edu/annual-conference/2019/agenda/digital-redlining

Gilliard, Chris and Hugh Culik “Digital Redlining, Access and Privacy”  Privacy Blog, Common Sense Education, May 24, 2016, https://www.commonsense.org/education/privacy/blog/digital-redlining-access-privacy 

Lanclos, Donna, and Andrew D. Asher. “‘Ethnographish’: The State of the Ethnography in Libraries.” Weave: Journal of Library User Experience 1.5 (2016).  https://quod.lib.umich.edu/w/weave/12535642.0001.503?view=text;rgn=main

Obermeyer, Ziad, Brian Powers, Christine Vogeli, and Sendhil Mullainathan. “Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations.” Science 366.6464 (2019): 447-453. https://science.sciencemag.org/content/366/6464/447

Noble, Safiya Umoja. Algorithms of Oppression: How Search Engines Reinforce Racism. NYU Press, 2018.

Seamster, Louise, and Raphaël Charron-Chénier. “Predatory inclusion and education debt: Rethinking the racial wealth gap.” Social Currents 4.3 (2017): 199-207. https://journals.sagepub.com/doi/abs/10.1177/2329496516686620?journalCode=scua

Watters, Audrey. (2014) “Ed-tech’s Monsters”  Hack education, Sept 3, http://hackeducation.com/2014/09/03/monsters-altc2014

Listening to Refusal: Opening Keynote for #APTconf 2019

Me delivering this talk, thanks to notes printed out at the last minute by Steve Rowett (thank you, Steve!) (photo by Lawrie Phipps)

On July 1st I had the great pleasure of delivering the opening keynote address to the APT Conference.  Before I try to represent my talk here, I need to thank the conference team, and especially Jason Davies, who contacted me last year to see if I would be interested in speaking at the event.  And I was, and I did, and I was glad to be there. When I got up to give this talk, I thanked the people in the room, and said “I hope I make you very uncomfortable.” I suppose the conference feedback will indicate whether or not I was successful.  (By the way, the slides and speaking notes for this talk are here.)

In April 2019, right about the same time that I was thinking about what I wanted to say at APT, a report from the UK Department for Education came out, titled “Realising the Potential of Technology in Education: a strategy for education providers and the technology industry.”

This government document sets out the vision for the use of technology in education (specifically in England, but with implications for the rest of the UK).  So I wondered at its approach, but not for long, as its emphasis was clear from the table of contents.

This report centers the needs and desires of the tech industry.  It trades in deficit models, starting from the assumptions that there’s not enough technology in educational contexts, and that more tech is the answer to “drive change.”

Words with the root “innov” (innovate, innovation, innovating, innovative) show up 43 times in this 48-page document.  Section 6 in particular gives the game away, with quite detailed concerns about the health and well-being of the edtech business sector in England, and the need for the industry to have streamlined access to education and educators.

The word “procurement” shows up 13 times, but “pedagogy” is nowhere in this report.
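Counts like these are easy to check for yourself against the extracted text of any policy document. A minimal sketch of the kind of word-root tally I mean (the sample text here is invented for illustration, not taken from the DfE report):

```python
import re

def count_root(text: str, root: str) -> int:
    """Count words beginning with the given root, case-insensitively."""
    # \b anchors at a word boundary; \w* matches the rest of the word,
    # so "innov" catches innovate, innovation, innovative, etc.
    return len(re.findall(r"\b" + re.escape(root) + r"\w*", text, flags=re.IGNORECASE))

sample = "Innovation drives change; innovative procurement enables innovating."
print(count_root(sample, "innov"))    # 3
print(count_root(sample, "pedagog"))  # 0
```

Running something like this over the report’s full text is how a tally of 43 “innov” words against zero mentions of “pedagogy” can be verified rather than taken on trust.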

The DfE report came out just after Lawrie Phipps and I had presented on findings from work we had carried out in 2018-19, on the teaching practices of lecturers in HE and FE.  We released this report at Jisc’s Digifest in March, the same month that our article on this same work was published in the Irish Journal of Technology Enhanced Learning.  I’ve discussed the broad outlines of this research elsewhere in the blog (and if you like you can watch our presentation on our approach and methods here)–for the purposes of this talk, I wanted to focus on the way we framed the work, and contrast it to the DfE report, because the research that Lawrie and I did seems to me the antithesis of that government document.  While that report started with technology, and assumed that there wasn’t enough of it, our assumptions were as follows:

  • People who teach have practices that involve digital.  
  • People have expertise, and make reasoned decisions around what to do and not do. 

In our approach to our project we did not start off asking about technology (even though our research questions definitely were about technology in teaching and learning contexts).  We started off asking about teaching.

Among the themes that emerged in our interviewees’ discussions about technology were the barriers and enablers to the use of that tech.  Nowhere in these barriers was “lack of access to education technology markets.” There were plenty of barriers that were human and organizational.  Time, priorities, values, relationships, and trust (or lack thereof) all informed the extent to which people did or did not engage with technology, both institutionally provided and otherwise.

It was also made clear over the course of our research that there were things being done with technology that  were not particularly “innovative” (e.g., lectures, grading, depositing materials for consumption). During our analysis, when thinking about barriers to technology use and in particular to “innovation” we found that practitioners were struggling with the disconnect between what they need to do in the spaces their institution provides, and what is possible–before they ever get to what they want to do, or what they might not know about yet.  

In institutional contexts where people do not have the time, organizational support, or access to resources that would allow for exploration around new tech, or using old tech in new ways, it’s not hard to see why “innovation” is hard to come by.  And also easy to see that “more tech” or “use the tech more” or even “create a market more friendly to vendors” isn’t going to produce more innovation. Or, more effective teaching and learning contexts.

We have encountered, over the course of this research and also in the other work we do in the sector, a distinct lack of compliance around certain kinds of education technology.

For example:

Lecture Capture

We witnessed and heard about a lack of participation in lecture capture: people not wanting to do it, citing concerns about labor exploitation and picket-line crossing, and even expressing fears of the wholesale replacement of lecturers with captured content.

VLE/LMS

We spoke to and also heard about academic staff who keep a minimal presence in the learning management system (course content, syllabi, calendars), but who engage in their actual teaching practices in digital contexts outside of institutional control.

Card Swipes

For this example, I told the story (shared with her permission) of a student who studied abroad as a part of her degree.  This experience led to a full time job before she had finished her time at university, and that job also made it financially possible for her to complete her university degree.  In her final year there was a conflict between (required) attendance in class and the times she needed to be on site at work. Her department had recently instituted card-swipes to track student attendance in class.  She worked with her head of department to get permission to not always be in class, and with that permission was “swiped in” by a classmate to satisfy institutional requirements.  

I have told elsewhere the story of students engaging in an elaborate ID card charade to get a non-student into the library space they wanted to study together in–in the end, four students went into the library, and the ID system recorded only three of them, not the fourth, unaffiliated one.

An inordinate managerial focus on compliance makes it hard to see actual practices.  The examples above show that if we mistake what is reflected in the VLE/LMS, in card-swipe systems, and in only the lectures that happen to be recorded for the holistic reality of teaching and learning practices, we are terribly wrong.  

Our “precision bias” means that the numbers given to us via card swipes and attendance records feel far more accurate than they actually are.  Knowing the behaviors behind these numbers means we cannot trust them as proxies for what we want them to measure. Attendance numbers don’t actually tell us much about students’ engagement with their courses of study.  Course content placed in institutional online spaces doesn’t necessarily reflect actual teaching practices. Card swipes in libraries don’t represent who is actually in the building at any given time.  

One overarching message in these stories, and in the research project overall, was that lack of trust can be corrosive.  Not being able to trust your institution with your actual practices means that you don’t share, and they don’t know, what you are doing.

I gave a brief presentation earlier this year about our research findings around non-classroom digital spaces and practices.  After talking about the ways that instructors engaged with students in non-classroom non-LMS/VLE digital places, the main question I was asked was “How can we make them use the LMS?”

Too often the institutional response is concerned with compliance, and furthermore assumes that if people are not complying, perhaps it’s because they don’t know how to do the “thing.”  So then we end up with lots of workshops and webinars about How To X. How to embed your gradebook into Canvas. How to upload captured lectures into Moodle. How to take attendance using clickers or card swipes.  

I have been reading Dr. Simone Browne’s Dark Matters: On the Surveillance of Blackness.  In this book, she writes a black feminist, critical race studies-informed take on surveillance studies.  I was familiar with surveillance (being closely observed, especially by an institutional power such as police or military, but increasingly by corporations, and by any entity with access to the stream of data we leave in our wake these days), but unfamiliar with Steve Mann’s concept of sousveillance, which he describes as a way of “enhancing the ability of people to access and collect data about their surveillance and to neutralize surveillance” (61).

So, an example of surveillance tech would be CCTV. An example of sousveillance would be using the camera in your smartphone to film the police during a protest.  

Dr. Simone Browne introduced me to the idea of dark sousveillance: a way to situate the tactics employed to render oneself out of sight (Dark Matters, p. 21, Kindle edition).  In particular she is theorizing and describing the means by which racialized people avoid being seen, so that they cannot be victimized by the structures and practices of surveillance.  An example of such behavior would be publicizing where the cameras are, so that you can avoid them.

Central to the idea of dark sousveillance is the fact that the surveilling gaze is institutionally White, and furthermore, as Browne demonstrates in her book, that the technologies and practices of surveillance have a deep history in the colonization and enslavement of Black and indigenous people.  The history of current surveillance practices involves the production and policing of racialized categories of people, in particular blackness and black people, so that they can be controlled and exploited.  

Dark sousveillance is a refusal of the power structures of surveillance.  I am helped in making this connection by the work of Lilian G. Mengesha and Lakshmi Padmanabhan, who define refusal as “what we conceive of as disruptions to the vicious dialectic of assimilation and resistance” (2019).

So in thinking again about surveillance, we can see that  assimilation would be having an Alexa in your house. Resistance would be hacking Alexa to observe only when you want it to.  Refusal is not having any such device in your house at all.  

The options of assimilation vs. resistance are still in reference to a given system, such as systems of gender relations, racial identity, and economic class.  Think of the revels of Mardi Gras, which serve to strengthen the message that you should observe Lent. The presence of The Fool requires that of a Monarch. There are fundamental assumptions and premises, hegemonies, shot through these systems.  

Refusal is not participating in those systems, not accepting the authority of their underlying premises.  Refusal happens among people who don’t have access to structural power. Refusal is a rejection of framing premises.  Recognizing refusal requires attention to, and credit for, tactics such as obfuscation or deliberate misinterpretation.  

“The tactics of refusal include: illegibility, opacity, and inaction” (Mengesha and Padmanabhan 2019).

In making this argument about refusal, I want to point to some examples of what I mean.

Ethnographic refusal has been defined by Dr. Audra Simpson (an anthropologist and member of the Kahnawake Mohawk–Haudenosaunee people) as “a willful distancing from state-driven forms of recognition and sociability” (2014) (cited in Mengesha and Padmanabhan, p. 3). In her discussion of doing work within her own community, she describes moments where the person she was talking to simply did not share what they knew.  Even if it was something “everyone knew”–it remained unspoken. And she, as an ethnographer and a Mohawk, joined in that refusal and did not write that information down, rejecting the assumption that anthropological knowledge requires the right to know everything.   

Think of any people among whom anthropologists want to do work, or on whose land archaeologists want to dig.  They have the right to refuse. They have the right to say No. Anthropologists have historically had a difficult time with that, and continue to need to work on recognizing and respecting ethnographic refusal. 

Simpson suggests that there is a great deal that is generative about refusal, and about theories of refusal–what can we learn from the limits that refusal indicates?

In 1997 I was still doing my own anthropological fieldwork in Northern Ireland, and this book by Begoña Aretxaga came out.  The blanket protests in the H-Blocks of Northern Ireland (1976–1981) were an example of refusal.  Republican and Nationalist men who were “on the blanket” were refusing their assigned (by the British state) status as criminals and asserting their status as political prisoners, protesting the removal of the Special Category Status that had defined them differently from criminals by refusing and rejecting regular prison uniforms.  These protests ended only after Thatcher’s government made concessions to the prisoners’ demands, and after the deaths of the hunger strikers, including Bobby Sands, in 1981. Aretxaga’s focus on the political tactics of Nationalist women in Northern Ireland, including those who themselves participated in blanket protests, reveals not just their refusal of the status of common criminals, but a further rejection of the idea that, as women, they could not be political prisoners, or active participants in Nationalist/Republican struggles at all.

Refusal is an action, not just a lack of action.  It is an exercise of agency, not just “non-compliance.”  So, faculty/academic staff refuse to use systems such as an LMS/VLE or lecture capture, refusing and rejecting the premise that they and their expertise can be reduced to a piece of content like a lecture, or a cache of PowerPoint slides.

These choices are not about inability, or digital skills or capability.  These choices are made because of people’s concerns about how their labor can be exploited, taken advantage of, made invisible or redundant.  They are refusing in a context of lack of trust, precarious labor, and a de-valuing of academia and academic work.

This is the point where I remind you that the Luddites were not anti-machine, and I would point particularly to Audrey Watters’ discussion of the Luddites and their frequently misrepresented agenda here.  The act of the Luddite “isn’t about rejecting technology; but it is about rejecting exploitation” (Watters 2014).  Luddites broke machines in protest against factory practices that devalued and erased their labor.

To what extent is edtech a “Captivating Technology” (to quote Dr. Ruha Benjamin in the introduction to her 2019 edited volume)–a technology of domination that embeds, fossilizes, and perpetuates racial, economic, and other inequalities in the name of technosolutionist “neutral” fixes?  Benjamin argues we need “ethical engagement with technoscience, where the zeal for making new things is tempered by an ability to listen to the sounds and stories of people and things already made” (9). 

Benjamin asks, “How, then, might we craft a justice-oriented approach to technoscience? It starts with questioning breathless claims of techno-utopianism, rethinking what counts as innovation, remaining alert to the ways that race and other hierarchies of difference get embedded in the creation of new designs, and ultimately refashioning the relationship between technology and society by prioritizing justice and equity.” (11)

Education technology is still technology.  People generate systems of classification to contain and control, and we need to ask, what racialized logics are embedded in the ways we point systems at students with concerns for their “success?”  Or that require staff compliance with edtech systems in the name of consistency, or quality control? Do we assume there aren’t any such logics? 

Do we assume or insist that “they can trust us?”  We do that at our peril, and theirs too, especially in a larger context where vulnerable students and staff are already under surveillance, where technology is implicated and embedded in the ways that race, gender, and class are produced and reinforced.  What reasons do students have to trust, given that context? Representatives of institutions cannot simply say “trust me” and have that come to pass.

We can find examples of refusal in specifically educational contexts, too.  The recent UIC graduate student strike was a refusal to work until material conditions and labor contracts (especially pay and health care provisions) were improved, in an overwhelming context of lack of trust in institutions, and of overall economic and political precarity.

An archivist at Hollins University, in Virginia, USA, refused to withdraw yearbook photographs of students in blackface from the university archives.  They did not trust the motives of their institution in removing those images, and called it out publicly on social media.

A group of faculty members at Yale withdrew their labor from the Ethnicity, Race, and Migration program because of a historical lack of resources and other structural support, as well as insufficient institutional recognition of their labor.  Dr. Tricia Matthew, at the time, highlighted that the problem was in part one of classifying labor as “service,” something antithetical to robust program-building at universities.  Recently Yale seems to have made assurances that new structural support will be made available to ER&M, and faculty members have “recommitted” to the program.

When we pay attention to the refusals of students and faculty, we learn more about what is at stake, and what is actually happening.  We also need to ask, if people cannot refuse, what does it mean? 

Do we want to define education as control and compliance, rather than growth?

What are the limits of refusal?  What does that tell us about power and the structures we have to navigate?  

And there are many things we should be refusing:

  •  Quantification
  •  Employability narratives
  •  Tracking and Surveillance
  •  Technocentrism
  •  “More with Less”

Those things emerge from the wrong way of framing education, if we value it as a form of social justice (and we should).

The framing of education as a place to sell more tech, as a potential market for a home-grown edtech Silicon Valley, rather than as a common good to be opened up to as many people and practices as possible–this framing is a political act.

The narrowing of education to a credential that gets you a job is a political act.

I have mentioned the tactics of refusal–in discussions of agency, and of what people can do in their given contexts, it’s useful to remember and incorporate de Certeau’s distinction between tactic and strategy.

Tactical refusal comes from a position of no power.  People will be exerting what agency they can, and we can learn from tactical refusals, seeing them as ways of communicating as well as ways of trying to survive.

Strategic refusal, then, would come from a position of power, but one that acts to dismantle current structures of power on behalf of powerless people.  Those of you who have power: what refusals can you make on behalf of the people who work for you, or for your students? How can we create situations where it’s possible for more people to refuse strategically (as in a strike, as in collective action)?

I want to emphasize again the importance of power structures in definitions of  refusal–we need to recognize that those with less power are the ones who are doing the refusing, the rejecting of the structures that disempower, misrepresent, and potentially victimize them.  

As Dr. Sara Ahmed notes:

“A struggle against power is a struggle for a right to no, a right not to agree with what you are asked to do or to be.”

What does any of this have to do with Education technology?

When people refuse (for example) to use the VLE/LMS, capture their lectures, or take attendance with digital tools, very often the institutional response is 1) “they aren’t capable, we should do more training,” or 2) “we need to make them comply,” or some combination of 1 and 2.

The lens of refusal gives us option 3)  “they have reasons for saying no.”

This appeals to me as an anthropologist; I am a big fan of my discipline’s conviction that there is an underlying logic to the behavior of people, even if it’s not immediately apparent to the observer.

The correct response therefore isn’t “How can we make them comply” but “Why are they refusing?  Have we done something wrong?”

And then you FIND OUT.

I gave a talk once where I cautioned libraries not to invite anthropologists into their midst if the reasons they wanted to learn about people was to make them do the “right thing” in the library.  The right way to go is to invite anthropologists to help libraries think critically about their practices, and change those practices so that people’s myriad needs can be more effectively met.

Not prediction.

Not persuasion.

Recognition.

Recognize the refusal.  Recognize it as evidence that something is wrong with what you are doing, as an institution.  Possibly the wrong is outside of your institution, but erupting within it (like student homelessness. Like lack of access to mental health care. Like lack of funding for higher and further education). Take heed of Dr. Sara Ahmed’s reminder that the person who says no, the person who registers a complaint, is far too often framed as the problem, rather than the thing they point to or refuse being seen as the problem.

Then your actions cannot just be about pedagogy and systems, but must be about politics and policy.

We, the people in the (APT) room, are trying to enhance, improve, and change the practices we see. We use lots of change management approaches, we use technology, and there is a tendency to see resistance and refusal as a way of disengaging, or as evidence of incapability. But for most of the people I have worked with, interviewed, or taught with, when they get to the point of refusal it is for none of these reasons.

I would point to the example of the government (in particular the Prime Minister) of New Zealand trying to define the value of their economy not around growth, but around well-being.  What if, instead of caring so much about the growth of the tech sector, or about compliance with uses of technology within institutions, we cared about the well-being of our students and staff?  What would that look like?

We need to stop seeing refusal as evidence that there’s something wrong with the people doing the refusing.  We need to see refusal as evidence that there is something wrong that they are communicating about, something wrong with the systems they are being presented with, with the structures in which they are placed.  And then we need to take responsibility for changing things. Value the people who refuse, because it is from those people that you can learn, and then work to build a more effective, more powerful set of practices within your institution.

Further Reading:

WOC Faculty (2018) “A Collective Response to Racism in Academia” Medium, May 8, https://medium.com/@wocfaculty/a-collective-response-to-racism-in-academia-35dc725415c1

Ahmed, Sara, (2017) “No” feministkilljoys, June 30, https://feministkilljoys.com/2017/06/30/no/

Browne, Simone. Dark matters: On the surveillance of blackness. Duke University Press, 2015.

Department for Education (2019). Realising the potential of technology in education: A strategy for education providers and the technology industry.  https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/791931/DfE-Education_Technology_Strategy.pdf

Lanclos, D., & Phipps, L. (2019). Trust, Innovation and Risk: a contextual inquiry into teaching practices and the implications for the use of technology. Irish Journal of Technology Enhanced Learning, 4(1), 68 – 85.  https://journal.ilta.ie/index.php/telji/article/view/53

Mann, S., Nolan, J., & Wellman, B. (2003). Sousveillance: Inventing and using wearable computing devices for data collection in surveillance environments. Surveillance & society, 1(3), 331-355.

Matthew, P. A. (Ed.). (2016). Written/unwritten: Diversity and the hidden truths of tenure. UNC Press Books.

Mengesha, L., & Padmanabhan, L. (2019). Introduction to Performing Refusal/Refusing to Perform. Women & Performance: a journal of feminist theory, 1-8.

Rahman, Zara, (2019) “Can data ever know who we really are?” Deep Dives, Medium, May 15.  https://deepdives.in/can-data-ever-know-who-we-really-are-a0dbfb5a87a0

Benjamin, R. (Ed.). (2019). Captivating Technology: Race, Carceral Technoscience, and Liberatory Imagination in Everyday Life. Duke University Press.

Simpson, A. (2007). On ethnographic refusal: indigeneity,‘voice’ and colonial citizenship. Junctures: The Journal for Thematic Dialogue, (9).

Watters, Audrey. (2014) “Ed-tech’s Monsters”  Hack education, Sept 3, http://hackeducation.com/2014/09/03/monsters-altc2014

View of the Pride Flag flying from Senate House, with my favorite Apologetic Building on Russell Square in the foreground, also the view from post-conference drinks July 1st (photo by me)