Reflection 4
Week 6 shifted into AI, which felt like a different but still related continuation of the previous week's themes.
What I appreciated most was that the course did not treat AI as either completely amazing or completely evil. It felt more balanced than that. Lucas Wright’s talk on artificial intelligence showed a number of practical ways AI can be used, especially for workflows, organization, and generating ideas faster. I can see why people find that useful, and honestly, I would be lying if I said I do not see the appeal. A lot of the examples made sense, especially for saving time on repetitive tasks.
But I think Valerie’s responses in that conversation were what made me reflect more deeply. She kept coming back to the concern that if AI starts doing too much, your own voice, judgment, and thinking can start getting flattened. That part really stood out to me. She talked about the “vanilla” feeling of AI-generated writing, and I knew exactly what she meant. Even when AI sounds polished, it can sound weirdly empty at the same time: it says things smoothly but without much actual perspective behind them. That matters a lot in a university setting, because if the tool starts shaping the direction of your work too much, then at some point it stops really being your thinking.
The course also made a good point about academic integrity not just being about obvious cheating. It is also about whether you are still doing the intellectual work yourself. That is why the warning about not relying on AI too heavily for research was important. The Week 6 materials included the note that AI should not be trusted to handle literature review work on its own because it misses too much of the research out there. That was a good reminder that something being fast or convenient does not mean it is actually good enough for serious academic work.
The environmental side of AI also made me pause more than I expected. I feel like people talk about AI mostly in terms of productivity, jobs, cheating, or the future of work, but not as much about the environmental cost. I liked that this week did not ignore that. Even if someone is excited about AI, I think digital literacy should include being aware of the material consequences too, not just the flashy benefits.
Overall, this week made me think about digital literacy in a more careful way. AI can definitely be useful, but that does not mean it should replace thought, judgment, or your own voice. I think that balance is what I took from this week most. Digital tools can help, but they should not be doing the thinking for you.