Reflection 6
The last few weeks of the course felt heavier than some of the earlier ones. Not in a bad way, just in the sense that the course started asking bigger and deeper questions.
The questions are no longer just "how do I use this platform," but more like who is this platform actually serving, what is it taking from me, and who gets pushed out of these systems even when they are advertised as efficient or modern or helpful. I think that is why this stretch of the course felt more serious and, honestly, more interesting to me.
The Bonnie Stewart talk was probably the clearest example of that shift. What stuck with me most was her explanation that datafication is not some abstract background process. It is happening basically every time we touch a device. Typing, clicking, deleting, scrolling, all of it leaves traces, and those traces become patterns that can be monetized, sold, and used to shape behaviour. That idea sounds dramatic at first, but honestly it also just feels true. A lot of digital systems act like they are just there to help us, when really they are collecting way more than most of us think about in daily life.
I also liked that she pushed back on the idea that data-driven systems are automatically more rational or neutral. That is one of those assumptions that sounds smart until you actually look at how these systems work in real life. If a system is built badly, or built around biased assumptions, then making it digital does not fix that. It can actually make it worse and harder to challenge because now it looks objective. Her point about systems penalizing poor and marginalized people more harshly was one of the biggest things I took away. Same with facial recognition not working equally for everyone. That kind of thing makes it obvious that digital systems are social and political to a large extent.
There was also something kind of unsettling in the way she described how normal all of this has become. Using Gmail, Google services, big platforms, constant tracking: all of it starts to feel so standard that you stop questioning it. I do that too, and I think most people do. It is easier to just accept the convenience and keep moving. But the course kept pushing the idea that digital literacy also means being able to notice the trade-off, even if you cannot fully escape it.
The Nodin Cutfeet talk added a different layer to that. I thought the discussion of Indigenous digital literacies was really important because it challenged the idea that digital literacy should look the same for everyone. A lot of tech education gets framed in a very specific way, like success means becoming the next startup founder or learning through systems built around mainstream assumptions. What stood out to me from the talk was that those assumptions do not always fit the communities they are supposedly trying to serve. Nodin talked about shifting toward community building and self-expression instead of forcing tech learning into one career-focused mold, and I thought that was a way more thoughtful approach.
The points about data sovereignty and context mattered a lot too. I liked that this part of the course did not treat openness as automatically good in every single case. That would be too simple. Some knowledge is relational, community-based, and not meant to be detached from its context and circulated like it belongs equally to everybody. That was one of the strongest ideas in this whole section for me. It made me think more carefully about who has the right to share certain knowledge, and who gets harmed when everything is treated like neutral content floating around online.
Then the Maha Bali talk pushed the equity side even further. I think her framework was useful because she made it clear that digital inequity is not just about whether someone has a laptop or internet connection. It can also be economic, cultural, and political. That feels way more accurate. I really liked her point that including people after the table is already set is not the same as actually designing something with them.
That connects to one of the examples she gave about averages and designing for some imaginary “normal” person. I thought that was such a good way to explain inequity. If systems are built around one average user, then a lot of real people are going to be treated like exceptions or problems. And that is where exclusion starts feeling normal, because the design itself already assumed a narrower kind of user.
I think what I am taking from these weeks is that digital literacy gets a lot more meaningful once it stops being framed as just a skill. Skill matters, obviously, but it is not enough. The bigger questions are who benefits, who gets watched, who gets represented badly, and who gets left out while the system still claims to be for everyone.