
Who Gets to Have Agency?

Trinity College, Hartford CT

I was pleased to have the chance to visit Trinity College, thanks to the invitation of Jason Jones.  I was asked to talk about "Agency," which is something I've been writing about and around for most of this year, I think.  As usual, this is my attempt to represent in writing what I said in the room.

****************************

Trinity College is located just west of the Kwinitekw River, within Wangunk homelands. The colonial city of Hartford occupies lands that were called Suckiaug, or black fertile earth, in Algonquian. The river valley has sustained countless generations of Wangunk people, joined by indigenous communities from across the globe, including within Hartford's Andean, Central American, and Caribbean communities. The land currently known as Connecticut is the territory of the Mohegan, Mashantucket Pequot, Eastern Pequot, Schaghticoke, Golden Hill Paugussett, and Nipmuc Peoples.

I want to begin by telling the story of Bryan Short.  You should read his own account of his experiences with submitting an FOI request for his data as collected by the learning management system (LMS) at the University of British Columbia.  If you do nothing else, watch this video he produced on the collection of student data.  Bryan was kind enough to talk to me earlier this year about what happened in his final year at UBC.

He was working at Digital Tattoo, a learning resource that teaches students about digital identity and how what they do online might have an impact in the face-to-face world.  He had a supervisor who encouraged him to look at the LMS, asking questions specifically about what data gets collected, and how it might be used.

Bryan was funded through the Centre for Teaching, Learning and Technology (CTLT), the part of the university responsible for the LMS.  He encountered people in the CTLT who were encouraging a critical take, as well as people (in particular those managing the LMS directly) who clearly felt a bit defensive about his line of inquiry.  He also recalls people in central IT services who, while worried about speaking out themselves, encouraged Bryan to be critical.

Bryan looked into the Terms of Use, and figured that the only way to learn comprehensively what was going on with his data was to submit an FOI request.  And this turned out to be a big hassle, because his university didn't actually have a process in place–suggesting to him that the expectation was that no student would actually ask about their data in the first place.

Bryan suggested that the lack of process, which might read as a lack of caring about students and their data, was actually a failure of disclosure and transparency.

Over the course of trying to get his university to share what was happening with the data their systems were collecting on him, Bryan never felt comprehensively supported in his interrogation of the process.  He encountered people who saw learning analytics as a way to help students. When "more forthright" instructors helped him ask questions by showing him the LMS dashboard that instructors could use to track participation, he took that information to administrators, who were dismissive about whether the data collected would actually be used (which does suggest we should be asking why we collect data that is unlikely to be used…).

Bryan's experience was that UBC was pushing back against his requests.  They blew through a few deadlines, implying that his requests were unreasonable.  Their pushing him away made him angry, and motivated him to continue. He was invited to speak to grad students at the iSchool, and he encouraged people in the class to fill out the forms and ask for their data, because he wanted to see if multiple requests would really break the university's ability to comply.

As he spoke to me about this, he remembered feeling that his position in the project was tenuous.  He even received emails from supporters suggesting that they were being pressured about what he was doing, and that he might get pulled into meetings about it.

At the end of his time at UBC, the university switched their LMS from Blackboard to Canvas.  For his final online-only class, he chose not to agree to the terms of use for Canvas. That created tension between him and his instructors, who then had to email materials to him individually.  He also couldn't engage in class discussions with his classmates, and in the end, of course, this impacted his ability to be successful in his classes; he didn't get the same experience that others were getting.

Bryan filed another FOI request with UBC and Instructure, and didn't get the information in time to do anything with it as a Digital Tattoo employee.  The day he received the information was his last day at Digital Tattoo, and there could be no follow-up on his part.

Bryan remembers hearing instructors and administrators say that the data collected would “help us help you!” but when he asked for evidence that the data collection actually helped struggling students, there was nothing.  There were, however, clear benefits for administrators wanting to manage and report on student activities.  

So let’s think about this, and ask the question:

Who gets to say no?

I read and hear versions of “We have all this data we should do something with it”  and “Help us help you” with no stories at all about students who were actually helped by massive data collection.  When questioned, many suggest they want this data because they are coming from a place of care.

At Trinity, IT and the Library are a merged unit, and as such that unit is in charge of multiple systems that collect and store student and faculty data.

And historically, libraries didn't keep all this data, because of concerns about patron privacy and protection.

The potential of the systems we have now to collect and surveil makes it easy to do market-imperative-driven things such as offering suggestions and creating profiles, and there is plenty of pressure to do so.

How much agency gets surrendered to these systems, to the predictive algorithm?

The work of Chris Gilliard (2016) and Safiya Noble (2018) provides two important cautions about the ways in which digital structures reproduce and amplify inequality.  Technology is not neutral, and the digital tools, platforms, and places with which we engage, online or off, are made by people, and informed by our societies and all of the biases therein.

This, then, is an important educational consideration: the tension between a "market forces" argument to use the data to predict and prescribe actions, versus an approach that centers pedagogy, process, and potential, and resists prediction in favor of providing opportunities to see what might happen.

In my work as an anthropologist in libraries and universities I have contributed to physical space and web redesigns.  There's been an interesting tension between "find problems and fix them" and "explore how people study/do research/teach/write."  I write about it with Andrew Asher in our article "Ethnographish."  In particular, we point to the culture of libraries (and the nature of institutions generally) as resistant to open-ended work that doesn't have a concrete problem to solve:

"Libraries are notoriously risk averse. This default conservative approach is made worse by anxiety and defensiveness around the role of libraries and pressures to demonstrate value. Within this larger context, where the value of libraries is already under question, open-ended, exploratory ethnographic work can feel risky."

Lanclos and Asher (2016)

I think these tensions I am identifying here are related. The contrast between treating students as problems to be solved (via predictive analytics) and treating students as people engaging in complex processes within education emerges from the same place that generates the contrast between "problem-solving" and open-ended exploration of behavior.  These are different parts of the same conversation around "What is the role of education?"

“A college education, whether it is a night class in auto mechanics or a graduate degree in physics, has become an individual good.  This is in contrast to the way we once thought of higher (or post-secondary) education as a collective good, one that benefits society when people have the opportunity to develop their highest abilities through formal learning.” 

(Tressie McMillan Cottom, Lower Ed, 2017, p. 13)

Whether you think education is about people acquiring credentials (a commodity), or whether you think it's a collective good, important to society as a whole, will likely play a part in whether you think that people working in institutions should primarily problem-solve, or work in less transactional ways to gain insight.

In a lot of design work I see the use of Personas, and there are some interesting issues around the use of personas and the extent to which they do or do not get directly reflected in designs.

What I have also found in my own more recent work, as someone brought into various higher and further education contexts to help people reflect on and develop their personal and professional practices, is that identity categories are quickly taken up by people.

We are primed in a variety of ways, by diagnostic tests and also by "fun" internet quizzes, to label ourselves: "I'm ENTJ," "I'm 40 but my social media age is 16."  I've spent a lot of time in workshops trying to manage people's anxieties around what they think these categories say about them as people.  They apologise for their practice, because they can read the judgements embedded in the labels: "capable," "novice," "1st year," "1st gen."

We have people deciding that they are more or less capable depending on the label they feel fits them.

Early in my time doing work in libraries, I was tasked with some web usability testing.  We generated task lists, reported on the efficacy of web environments, etc. It was clear to me in that work that people didn't sit down at a website and say "I'm a first year, and I'm using this website."  They sat down and said "I'm writing a paper, I need to find sources." So I was perplexed at the use of personas in web UX, because in the course of my research I saw people making meaning of their encounters with the web environment based on what they wanted and needed to do, first and foremost–not who they were.  What I was told, when I asked, was that personas are useful to have in meetings where you need to prove that "users are people."

(Sidenote: I'd rather start with "people" than "users," especially in a library context, because your community includes people who aren't necessarily visible "users" of any of your spaces.)

When UX workers use personas to frame our testing of websites, we have capitulated to a system that is already disassociated from people and all their human complexity.  The utility of personas is a symptom of the lack of control that libraries and librarians have over the systems they use. How absurd to have to make the argument that these websites and databases will be used by people.  The insidious effect of persona-based arguments is to further limit what we think people are likely to do as particular categories. Are first year students going to do research? Do undergraduates need to know about interlibrary lending?  Do members of academic staff need to know how to contact a librarian? Why or why not? If we had task-based organizing structures in our websites, it wouldn't matter who was using them. It would matter far more what they are trying to do.

So I have a problem, clearly, with using personas as a design principle for organizing your spaces around identity.

I think it's important to consider: what are your systems and structures communicating to the people in your library about what is possible?  Are they organized around who you think those people are?

Or around what they can do?

One of those provides more room for choice and agency than the other.

This is not to say that identity doesn't matter–but what we want is for identity to come from, and inform, how students want to work and what their work means.  We should not want identity to be a controlling category that limits what is possible.

Who is to say that undergraduates don’t need similar kinds of access to website space that faculty do?  At some point both of them are writing, both of them are researching. The difference is in how deep a dive they do, not in the basic activities.

So, my advocacy would be for practice-based personas, if you are going to use them.  Why?

Because it provides space for agency.  

All year I have been giving talks that revolve around de Certeau's distinction between kinds of agency, in particular tactical vs. strategic agency.

I have mentioned refusal, and we can use de Certeau's framing to distinguish between tactical refusal, which comes from a position of no power, and strategic refusal, which can be engaged in by people with power.

Let's think about our community members–and here I will be indulging in a bit of persona-making.

What does student agency look like? They can make choices.  But there are often constraints around those choices. It’s worth asking, for example, in the case of learning analytics, the extent to which a student could actually choose not to participate in the systems that harvest data, and still successfully navigate to their degree.  

Faculty have more institutional power than students, and sometimes more than non-faculty staff at universities and colleges, but they are themselves embedded in their own webs of power and influence, and don't always get to be strategic.  For example, they technically have choices about when and where to publish, but there are tenure and promotion requirements that constrain those choices. Even if faculty value Open Access and all it stands for, if they want tenure they might have to submit their work to journals that are closed and paywalled, because that is what success looks like in their discipline.

Faculty can also be limited in what and how they teach, as I witnessed when a junior faculty member at a university was discouraged from teaching in active learning classrooms because they “can’t teach as much content that way.” Regardless of that faculty member’s own perspective on teaching and effectiveness, they only had so much power to engage.  It’s also worth remembering that any faculty member who is not a cisgendered heterosexual white man is even more vulnerable, and in need of care.

This is all about power and culture as well as practice.  

So, what are people working in education, in IT and libraries, to do?

Let’s think again about orienting to practice, rather than identity.  I find this useful not just as an anthropologist, but as someone concerned with social justice, and the ways that institutions can use identity to constrain and cap the potential that people have to do unexpected things.

Approaches to digital literacy can be similarly constraining–when we test people and put them in categories, that offers fewer options (and far less imagination) than assuming that everyone has a practice, and also that everyone (faculty and students alike), upon arrival into an institution, could use some information and help with How Things Are Done Here and What Is Possible.

So in an ideal world, libraries and educational IT (and the universities and colleges in which they are embedded) would recognize the range of practices involved in scholarship (reading, writing, processing, communicating, researching, testing, etc.), and would then also have the resources to configure places (digital and physical) where these things are not only possible, but where those possibilities are signaled to their community members.

This is not the same thing as "freshmen go here."

This is about flexibility, and communication, and also the ability to let go of what people “should” be doing when they do scholarship.  While there are wrong ways to do it, there is also a spectrum of right ways, and much of that has to do with accommodating the ways that people need to fit being a scholar into the rest of their lives.

I want to point here to the work I got to do on the lives of commuter students.  In that project, we interviewed student-parents about their academic practices, and where they studied (and why), to gain insight into their lives.  We got to use this work to make an argument for a family-friendly study room in the library, and then to evaluate the initial impact that room might have on the lives of students.  This wasn't a project that was reacting to a "demand"–there wasn't a sense among students, before we started this project, that this was work the library could do. In connecting with students, and listening to their stories of trying to carve out time to study in the course of their complex lives, we worked towards giving our students more choices.  This was the library facilitating strategic agency: using the power that the library and education technology can have to create spaces for students to discover and engage in the kinds of practices that work well for them.

Open-ended ethnography can be a way to create space for us to imagine ways to allow agency in educational spaces.  Exploratory work that isn't just about "solving problems" can lead to insight, and allow library and IT workers agency too: to go beyond instrumental approaches, to move away from purely tactical, reactive approaches, and to gain access to more strategic levels of iterative planning and decision making.

Resources and Further Reading

Cottom, Tressie McMillan. Lower Ed: The Troubling Rise of For-Profit Colleges in the New Economy. The New Press, 2017.

Gilliard, Chris, and Hugh Culik. "Digital Redlining, Access and Privacy." Privacy Blog, Common Sense Education, May 24, 2016. https://www.commonsense.org/education/privacy/blog/digital-redlining-access-privacy

Lanclos, Donna, and Andrew D. Asher. “‘Ethnographish’: The State of the Ethnography in Libraries.” Weave: Journal of Library User Experience 1.5 (2016). https://quod.lib.umich.edu/w/weave/12535642.0001.503?view=text;rgn=main

de Certeau, Michel, and Steven Rendall. The Practice of Everyday Life. University of California Press, 2011.

Lanclos, Donna, and Rachael Winterling. "Making Space in the Library for Student Parents." In Academic Libraries for Commuter Students: Research-Based Strategies, edited by Mariana Regalado and Maura A. Smale, 33-51. Chicago: American Library Association, 2018. https://drive.google.com/file/d/0B-EwMBW4DXF1WUVrZEtYeUp0Y1BZbE5RTDBKakpzdWFqQ0Rn/view

Noble, Safiya Umoja. Algorithms of Oppression: How Search Engines Reinforce Racism. NYU Press, 2018.

Gaining insight over fixing problems: how open-ended research can teach us what we need to know

November sunset in Guelph

I was so pleased to be invited to the University of Guelph library by Karen Nicholson and Ali Versluis to give a talk and also to talk with people in the library about user experience and ethnographic research in library and education contexts. This was the last talk that I gave during my November Tour, and I think it came together the most solidly of the four (there’s something to be said for the repetition of experiences in getting things right, note to self). I would also like to thank Chris Gilliard for reading early drafts of this, and helping me clarify some of my argument. Thanks to Jason Davies for the Mary Douglas citation. And credit as well to Andrew Asher, who was my research partner in some of the work I talk about here.

I wrote this talk at my home, in what is now called North Carolina, in the settler-occupied land of the Catawba and Cherokee people.  I am a Cajun woman, and my people are a settler people from the Bayou Teche, on Chitimacha land in what is now called Louisiana.

I want to acknowledge here the Attawandaron people on whose traditional territory the University of Guelph stands, and offer my respect to the neighboring Anishinaabe, Haudenosaunee, and Métis.

************************************

A few years ago, Andrew Asher and I were hired to do a project for an international non-profit that provides electronic resources to libraries in less well-resourced countries.  The organization was aware that there were low-use and high-use institutions among those they were providing resources to, and wanted to know why that difference was there.

So we interviewed people in Zambia and in Kyrgyzstan, in places that this organization told us didn't have connectivity issues.  While there might not have been connectivity issues on the university campuses, the practical experience of connectivity was not consistent, as people were not always on campus.  As researchers, we encountered this as a problem early on, for example not being able to use Skype for interviews because of connectivity problems. We ended up doing a mix of Skype calls to mobile phones, and WhatsApp, to conduct interviews in locations where the internet was not reliable for our participants.

Among the things we found out in the course of our research was that in Zambia, people who wanted faster internet bought ISP "rabbits" to gain access off campus. We interviewed a PhD candidate in Engineering who made the point that unless you were on the university network (Eduroam), you could not use university materials (such as library resources).  Therefore, using the faster, more reliable (but more expensive) rabbit modems in Zambia locked students and staff out of their institutional resources.

We interviewed a Lecturer in Education who had similar issues, even though he was at a "high-use" institution.  It wasn't that the subscriptions weren't there, or that the resources weren't theoretically available, but that connectivity made those resources less useful, as they were difficult to get to:

“Yes, like I was telling you, either you subscribe to some journal publisher and because of poor connectivity, you may not get access to those services.  So it’s basically attributed to poor connectivity. Not that the institution does not have the information, the information could be there but the connectivity limits us from getting access.  Cause the system gets to be slow.”

This scholar did point out that this doesn't happen too frequently, so he wasn't going to complain too much about access.  But he highlighted what's at stake when those failures happen: he can't do his work.

“Basically, I can just say that is it poor connectivity and when there’s poor connectivity and there’s something that I urgently need to confirm because like when I’m reading a journal article where somebody has cited somebody.  There are times when I actually need to read the other article or if it’s a book which they refer to so I’ll probably have to go online to download and if there is not connectivity then that becomes a problem.”

Our research revealed that use of resources (or lack thereof) wasn’t just about connectivity, it was also about culture, and the separation that scholars experienced from the people working in the library.  One librarian we spoke to made it clear that the levels of authentication that scholars found burdensome were there on purpose to make sure that only the right people could have access to them. That, however, translated to even the “right people” using those resources less, or not at all, preferring to spend their precious internet time on getting to resources that were more easily accessible, even if not institutionally provided.

In Kyrgyzstan, one scholar assumed that because the physical collection in the library was out of date and inadequate, the electronic resources would be, too.

So, scholars in these two countries, in both "high" and "low" use institutions according to the non-profit, acquired and shared resources via printing, email, and thumb drives more often (and more reliably) than they got them online via the resources paid for and provided by the organization.

The implications we drew out were as follows:

  • Providing materials “online” is not the same as providing “access” when the internet is not a sure thing.  Also, having a connection is not the same thing as being connected enough to make using online resources a feasible option. There are many barriers to accessing library materials that are outside of the library’s own systems and infrastructure.  
  • Scholars find what they need, and what is accessible–if they Google something and it's closed-access, they move on until they find something they can use.  The existence of the materials does not necessarily translate into their use.  
  • The disconnect of the library from the research workflow of the scholars interviewed here was striking, especially in the context of their awareness of the need for training and for knowledge about how to better navigate useful resources.  For example, one Lecturer in Education had been at her current institution for 4 years before she knew about electronic resources, and even then she only learned about them once she had started her PhD studies at another institution.

And our recommendations were things like:  pay attention to physical infrastructure when you offer online resources to institutions.  Consider offering resources in digital forms that aren’t just online. Think about facilitating more networking and connections between the people in the library and their surrounding community of scholars.  Basically, we told them context matters, and that the non-profit, in providing online resources, was operating as if they were in a vacuum.

Our report had to do with infrastructure, economics, and the lives of the scholars (faculty and students).  The non-profit wanted a problem to fix, and in many ways that was reasonable: it cost them money to provide these resources, and they wanted to avoid wasting resources.  What we as researchers presented them with was an exploration of the contexts in which the people they were trying to help (via libraries) were restricted in what was or wasn't possible.

We did not provide them with a quick-fix solution.  In many ways, the questions they wanted to ask were inevitably going to have disappointing answers.  

And, well, the qualitative work we did wasn't satisfying in the short term, but I think it's important nonetheless.

Why was our research unsatisfying? Well, to some extent, the reason is the culture of libraries.  

I will point again to the article "Ethnographish" that Andrew and I wrote.  We wrote it in a moment, several years into our collective work as anthropologists working in libraries, when we wanted to try to think critically about why the work we were doing looked the way it did, and also why particular kinds of work (especially open-ended exploratory ethnography) were so hard for us to do.

Our argument is: open-ended exploratory research is a hard sell in libraries.  We see UX research not just because it's useful, but because it's finite, and in particular because it proposes to solve specific problems.

"Libraries are notoriously risk averse. This default conservative approach is made worse by anxiety and defensiveness around the role of libraries and pressures to demonstrate value. Within this larger context, where the value of libraries is already under question, open-ended, exploratory ethnographic work can feel risky." (Lanclos and Asher 2016)

I think that in positioning themselves as problem-solvers, libraries and library workers are positioning themselves in a tactical way.  De Certeau's distinction between kinds of agency (tactical vs. strategic) is useful here, helping us think about the kinds of actors who are allowed choices given their structural position.  To what extent do libraries and library workers get to make decisions that aren't just tactical, not just reactions to situations? How and when do libraries and library workers get to make strategic decisions?  Because that has to be more than just responding to demands and solving problems.

A while ago I gave a talk at a CUNY event that advocated for the mixed-methods library.  Lots of assessment departments talk about doing (and some do) both qualitative and quantitative work (though I still stand by my impression that a lot of the qualitative stuff is UX-style "what is the problem" approaches).  I gave that talk in 2014, and at the time, part of what I was pointing to was the need to get insights that numbers would not give us.

For example, I worked with a university that participated in the Measuring Information Service Outcomes survey.  Some of the bar charts we can generate from this data look like this:

We have all of these numbers, but what do they mean?  What does "satisfied with the library" mean, anyway?  Can graphs like these tell us anything?

In that 2014 talk I actually said "I don't want to get rid of quantitative measures in libraries," but now in 2019 (and actually, way earlier than that) I have decided it isn't my job to advocate for quantitative anything, and not just because lots of other people are already advocating for that.

Because now in 2019, quantification and problem-fixing orientations have landed us with learning analytics, and library analytics, and I think there's a lot more at stake than "these bar charts don't tell us enough" (which was bad enough).  We have arrived here in part because somewhere along the way, arguments accompanied by numbers were interpreted as Most Persuasive (I think we get to thank Economists, as a discipline, for this, given their infiltration into popular news media as commentators).

Being able to categorize people also feels like a constructive action, a first step towards knowing how to “help” people (and categories are certainly central to particular practices in librarianship, and yeah they come with their own troubled history, as anyone who’s read critical work on LOC or Dewey systems will attest).  

So let's think about the impact of categorizing and quantifying academic work, including the work of libraries.  Let's think about what we are doing when we put people into categories, and then make decisions about capability based on that.  And yeah, pop culture quizzes, and even sometimes those management personality tests, can be fun.

Where it all ceases to be fun is when decisions get made on your behalf based on the results.

Frameworks and quizzes and diagnostics (what I like to call the "Cosmo Quiz" school of professional development) are often deployed with the result that people decide what "type" they are to explain why they are doing things.  Pointing to individual "types" and motivations provides an easy end-run around the organizational, structural, and cultural circumstances that might also be the reasons for practice. Because then, when there are problems, it is up to the individual to "fix it."

What are we doing when we encourage people to diagnose themselves, to categorize themselves with these tools?  The underlying message is that they are a problem needing to be fixed (fixes to be determined after the results of the questionnaire are in).

The message is that who they are determines how capable they are.  The message is that there might be limits on their capabilities, based on who they are.

The message is that we need to spend labor determining who people are before we offer them help.  Such messages work to limit and contain people, rather than making it easy for people to access the resources they need and to define themselves, for their identity to emerge from their practice, from their own definitions of self.

When UX workers use personas (another way of categorizing people) to frame our testing of websites, we have capitulated to a system that is already disassociated from people and all their human complexity.  The insidious effect of persona-based arguments is to further limit what we think people are likely to do as particular categories. Are first year students going to do research? Do undergraduates need to know about interlibrary lending?  Do members of academic staff need to know how to contact a librarian?  Why or why not? If we had task-based organizing structures in our websites, it wouldn't matter who was using them.  It would matter far more what they are trying to do.

I am informed in this part of my argument by the anthropologist Mary Douglas on How Institutions Think, and in particular by her point that institutions are socially and culturally constructed, and that they themselves structure knowledge and identity.  Douglas's work allows us to think of personas and other kinds of personality-test categories as "patterns of authority": not just ways of trying to make things clear, but ways of reifying current structural inequalities, categories that limit people and their potential. When institutions do the classifying, the resulting patterns are authoritative ones; the profiles that suggest plans of action come at the expense of individual agency, and imply that the institutional take on identity is the definitive one that determines future "success."

What are the connotations of the word “profile?”  If you have a “profile” that is something that suggests that people know who you are and are predicting your behavior.  We “profile” criminals. We “profile” suspects. People are unjustly “profiled” at border crossings because of the color of their skin, their accent, their dress. 

"Profiles" are the bread and butter of what Chris Gilliard has called "digital redlining": "a set of education policies, investment decisions, and IT practices that actively create and maintain class boundaries through strictures that discriminate against specific groups."  His work is at "the intersections of algorithmic filtering, broadband access, privacy, and surveillance, and how choices made at these intersections often combine to wall off information and limit opportunities for students."

"Now, the task is to recognize how digital redlining is integrated into technologies, and especially education technologies, to produce the same kinds of discriminatory results" (Gilliard and Culik 2016).

In his recent EDUCAUSE talk, Chris gave some examples of what he calls "EdTech WTF moments":

  • "Facemetrics tracks kids' tablet use. Through the camera, patented technologies follow the kids' eyes and determine if the child is reading, how carefully they are reading, and if they are tired. 'You missed some paragraphs,' the application might suggest."
  • "In a promotional video from BrainCo, students sit at desks wearing electronic headbands that report EEG data back to a teacher's dashboard, and that information purports to measure students' attention levels. The video's narrator explains: 'School administrators can use big data analysis to determine when students are better able to concentrate'" (Gilliard 2019).

One problem is that it's possible to extract quantified behavioural data from systems, in a context (e.g., libraries) where quantified data is perceived as most persuasive.

What gets lost in quantification is not just the Why and How (quantification is really good with the What, and occasionally the Where), but also the privacy, safety, and dignity of the people whose data you are extracting.  This is a "just because you can doesn't mean you should" situation, especially when we consider our responsibility to people who are already over-surveilled, hypervisible, and structurally vulnerable (i.e., Black, brown, and Indigenous people).

Let's look at this Guardian article on student surveillance, and here I'm guided again by Chris Gilliard's deep dive on the piece:

https://twitter.com/hypervisible/status/1186625050184732672

Basically, companies like Bark and Gaggle are using school worries about liability around school shootings and student suicides and bullying as a lever by which they gain access to the schools.  They sell “security” when what they are actually peddling is “surveillance.”  

In this article none of the concerned parties are talking about gun control, or about human systems of care that can deal with mental health issues, address discrimination against LGBTQ+ kids, racial bias, and so on.  The companies are selling results that are not borne out by the research they hand-wave towards. They are counting on people being too scared not to engage with these systems, because they feel helpless.

(sound familiar?)

Read the damn thing yourself too, it’s terrifying to me: https://www.theguardian.com/world/2019/oct/22/school-student-surveillance-bark-gaggle

And of course it gets worse: as I was writing this talk, a bill was introduced by US Republican senators to make school engagement with this tech (and these tech companies) MANDATORY.

Thanks to Chris Gilliard and his work, I am also aware of Simone Browne's book Dark Matters: On the Surveillance of Blackness.  In this book, she writes a black feminist, critical race studies-informed take on surveillance studies.  She points particularly to the history of surveillance technology as one that emerges from the white supremacist need to police black people, black bodies. Her examples include the enslavement trading practices of the 1800s and the tracking and control of enslaved people via paper permits and laws about carrying lanterns after dark, and she makes it clear that this history is relevant to current discussions of how we make people visible, in what circumstances, and why.  We cannot disentangle race and inequality from our discussions of these technologies, nor should we try to in a quest for "neutrality" or "objectivity."

The surveilling gaze is institutionally white, and furthermore, as Browne demonstrates in her book, the technologies and practices of surveillance have a deep history in the colonization and enslavement of black and indigenous people.  The history of current surveillance practices involves the production and policing of racialized categories of people, in particular blackness and black people, so that they can be controlled and exploited.

We need to think too about the racist context in which data is generated and collected, as in the case of the health care data used to generate algorithms intended to guide health care decisions (Obermeyer and Mullainathan 2019).  In Ruha Benjamin's (2019) perspective piece on that study in Science, she notes that researchers "found that because the tool was designed to predict the cost of care as a proxy for health needs, Black patients with the same risk score as White patients tend to be much sicker, because providers spend much less on their care overall."
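To make the mechanics of that proxy problem concrete, here is a minimal, hypothetical sketch (not the actual model the researchers audited, and with made-up numbers): if spending is systematically lower for one group at the same level of need, then a score trained to track cost will understate that group's illness at any given score threshold.

```python
# Hypothetical illustration of proxy-label bias (not the audited algorithm):
# cost of care stands in for health need, but one group's care is under-funded.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# True (unobserved) health need, identically distributed in both groups.
need_a = rng.gamma(shape=2.0, scale=1.0, size=n)
need_b = rng.gamma(shape=2.0, scale=1.0, size=n)

# Observed spending: group B receives systematically less care for the same need.
cost_a = 1.0 * need_a + rng.normal(0.0, 0.2, n)
cost_b = 0.6 * need_b + rng.normal(0.0, 0.2, n)

# A "risk score" trained to predict cost will simply track cost, so compare
# true need among patients flagged above the same score threshold.
threshold = np.percentile(np.concatenate([cost_a, cost_b]), 90)
print("mean need, group A above threshold:", round(need_a[cost_a > threshold].mean(), 2))
print("mean need, group B above threshold:", round(need_b[cost_b > threshold].mean(), 2))
# Group B patients flagged at the same "risk" level are, on average, sicker.
```

The point of the sketch is only that the bias lives in the choice of training label, before any model is ever fit.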

While surveillance and tracking are clearly forms of control, and the use of algorithms is a problem, their use is often framed as care (again, see the people interviewed and quoted in the Guardian article, and this is an argument I hear in library contexts too, “we need the data to care for students and faculty.”)

Insisting that people have to participate in systems that harvest their data to have access to education or health care is a kind of predatory inclusion.  

“Predatory inclusion refers to a process whereby members of a marginalized group are provided with access to a good, service, or opportunity from which they have historically been excluded but under conditions that jeopardize the benefits of access. Indeed, processes of predatory inclusion are often presented as providing marginalized individuals with opportunities for social and economic progress. In the long term, however, predatory inclusion reproduces inequality and insecurity for some while allowing already dominant social actors to derive significant profits (Seamster 2017).”

When people become aware that they are under surveillance, there can be a "chilling effect" where they do not engage with the system at all.  This is refusal: not engaging with the system because of wariness about what might happen if they do.  We need to consider carefully the disparate effects some of these methods of surveillance may have on trans students, undocumented students, and other vulnerable populations.

Our role as educators, as workers within education, should be to remove barriers for our students and faculty (and ourselves), not give them more.

We also need to think critically about whether the systems we are extracting data from accurately reflect the behaviors we are interested in.  For example, borrowing histories, swipe card activity records, and attendance tracking are all proxies for behaviors, not direct observations, and not necessarily accurate representations of behaviors (even as they might seem precise, and make us feel good about our precision biases).

And if you are worried about “How will we know…X” please do not assume that these systems are the only way.  Because the vendors selling these systems that collect this problematic data want you to THINK that it’s the best and only way to find things out.  But that is not true.  

The fight against quantification, pigeonholing, surveillance, and tracking should include qualitative research engagement: like the stuff that I do, like the stuff I try to write about and train people to do, and encourage them to try; engagement with the people from whom we want to learn, and with whom we want to work.  I would even suggest that the lack of "scalability" of qualitative methods is a benefit, if what we want is to be able to push back against surveillance and automated systems.

It's about more than being able to be strategic on behalf of libraries and library workers; it's also about being able to create space for students and faculty to be strategic, to exercise power and agency in a context that increasingly wants to remove that and put people at the mercy of algorithms.  This is particularly dangerous for already vulnerable people: Black and brown people, Indigenous people, women, LGBTQ+ people. Exploratory ethnographic approaches, engaging with people as people (not as data points), give us not just more access to the whys and hows of what they are doing, but can work to connect us with them, to build relationships, so that we don't have to wonder for long "why are they doing that."  Then we won't have to listen to people who rely on machines and their broken proxies for human behavior and motivations.

Further Reading and Resources

Jones, Kyle M. L., ed. "Learning Analytics and the Academic Library: Critical Questions about Real and Possible Futures." Special issue, Library Trends 68, no. 1 (2019).

Benjamin, Ruha. "Assessing Risk, Automating Racism." Science 366, no. 6464 (2019): 421-422. DOI: 10.1126/science.aaz3873. https://science.sciencemag.org/content/366/6464/421.full

Browne, Simone. Dark Matters: On the Surveillance of Blackness. Duke University Press, 2015.

de Certeau, Michel, and Steven Rendall. The Practice of Everyday Life. University of California Press, 2011.

Douglas, Mary. How Institutions Think. Syracuse University Press, 1986.

Gilliard, Chris. "Digital Redlining." Featured session, EDUCAUSE Annual Conference, Chicago, October 16, 2019. https://events.educause.edu/annual-conference/2019/agenda/digital-redlining

Gilliard, Chris, and Hugh Culik. "Digital Redlining, Access and Privacy." Privacy Blog, Common Sense Education, May 24, 2016. https://www.commonsense.org/education/privacy/blog/digital-redlining-access-privacy

Lanclos, Donna, and Andrew D. Asher. “‘Ethnographish’: The State of the Ethnography in Libraries.” Weave: Journal of Library User Experience 1.5 (2016).  https://quod.lib.umich.edu/w/weave/12535642.0001.503?view=text;rgn=main

Obermeyer, Ziad, and Sendhil Mullainathan. “Dissecting Racial Bias in an Algorithm that Guides Health Decisions for 70 Million People.” Proceedings of the Conference on Fairness, Accountability, and Transparency. ACM, 2019. https://science.sciencemag.org/content/366/6464/447 

Noble, Safiya Umoja. Algorithms of Oppression: How Search Engines Reinforce Racism. NYU Press, 2018.

Seamster, Louise, and Raphaël Charron-Chénier. “Predatory inclusion and education debt: Rethinking the racial wealth gap.” Social Currents 4.3 (2017): 199-207. https://journals.sagepub.com/doi/abs/10.1177/2329496516686620?journalCode=scua

Watters, Audrey. "Ed-Tech's Monsters." Hack Education, September 3, 2014. http://hackeducation.com/2014/09/03/monsters-altc2014