AI in Higher Education & Technologies of Control

Hello and welcome to the latest edition of my newsletter, Poiesis. This newsletter is where I share my research and practice relating to society and technology — AI, misinformation, surveillance, ethics, and more. It’s my way of helping you understand and change the fast-moving world of social technology.

This edition centers on the growing challenges of AI in education. I work in higher education as a postdoctoral researcher (hopefully someday as a professor), and I have pivoted some of my research to focus on AI and education. There is so much to discuss on the topic, but this edition will focus on AI’s role in control over education and information.

But first, a bit about my goals

I recently decided to take some time to address my efforts to create more content on social media and to spread information about my research and practice more broadly — including in this newsletter.

[Instagram Reel]

I spent the last five years or so working to earn my PhD jointly in computer science and cognitive science. I originally applied to grad school hoping to research AI and ethics, and was accepted into a lab that did some of that work. Through twists and turns, I ended up spending my PhD years researching a variety of topics: robotics and ethics, AI, misinformation, and surveillance. I picked up some sociotechnical frameworks along the way.

The original reason I wanted to earn a PhD was to try to make the world a better place through scholarly work. I looked up to so many academics whose books I read or talks I watched, whom I viewed as having shaped the world through their ideas. I worked hard to try to replicate what they did — developing critical frameworks, spending time deeply analyzing the world, spreading those ideas through academia. What I learned as I progressed in my PhD is that this is incredibly difficult, and that academia is an insular institution where a lot of very useful knowledge becomes locked away.

During my time working on my PhD, I also got deeply involved in some major social movements for change: Sunrise Movement, Peace Action, and Dissenters. Then, during my last few years at Tufts, I worked with some friends to organize our colleagues into a graduate student labor union so we could collectively fight for a better experience in the university. Through working with these organizations, I learned fascinating ways to think about changing the world for the better — things I had never even heard about in school, the workplace, or broader culture.

As I’m now figuring out what life looks like post-PhD and how I want to contribute to the world, I feel drawn to working hard to synthesize these two pursuits: contributing to the world of ideas through critical understandings of our world, and sharing and using ideas and practices from social movements so we can achieve some of those visions.

But the professional world of academia can be very insular and exclusive, so I wanted to start sharing useful knowledge, ideas, and analyses through social media to engage a broader audience. That’s why I’ve been dedicating so much time to making videos, writing social media posts, and producing this newsletter — to bring critical sociotechnical thinking and ways to improve our world to anyone who wants to engage with them.

AI in Education & Concerns of Control

Not too long ago, I wrote an op-ed for Truthout describing the push by the Trump administration and Big Tech companies to spread AI technologies into schools. I recently summarized it in a Reel on Instagram to share the main points of what I had uncovered.

[Instagram Reel]

Those who are pushing for AI technology in education put forward visions where AI is used for almost every classroom function. OpenAI, Google, and Magic School all have language on their websites arguing that their tools can take on nearly every core responsibility of a teacher: creating lesson plans, quizzes, and tests; tutoring students; and conversing with them. Even academics who are sympathetic to the technology, such as those convened for Stanford’s AI Education Summit, argued that AI could “provide real-time feedback and suggestions to teachers (e.g., questions to ask in the class)” and “summarize the classroom dynamics… includ[ing] student speaking time.”

These scenarios imagined by Big Tech and sympathetic academics are incredibly dangerous, primarily because of the level of control they hand to technological systems and to the companies or governments that control them. Authoritarian regimes throughout history, whether Mussolini’s Italy or Stalin’s Soviet Union, have sought to control information and education as a means of achieving obedience. Movements for genuine democracy have long had to fight for critical thinking against thought control. In her work The Age of Surveillance Capitalism, Shoshana Zuboff argues that data-driven technologies are inching toward being able to enact the visions of past authoritarians, albeit with far less need to coerce human beings along the way.

AI is the perfect tool for any authoritarian seeking a key piece of totalitarian rule: control over education and media. If AI were used by teachers for creating lessons, interacting with students, and evaluating students, and if it were used widely by students for their own inquiry and studying, it would have enormous influence over what both students and teachers think. It would then be easy for the corporation controlling the technology, or a government in partnership with it, to shape the AI models to push students and teachers in a particular direction.
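To make that mechanism concrete, here is a minimal sketch in Python, using OpenAI’s chat-completions API, of how a deployer could quietly steer every answer a student sees. The hidden system prompt is a hypothetical I wrote purely for illustration (no vendor publishes such a directive), and the model name is just an example, but inserting a directive like this at the deployment layer really is this simple:

```python
# Minimal sketch of deployment-layer steering. The system prompt below is
# hypothetical, written only to illustrate the mechanism; it is not any
# vendor's actual configuration. Requires the openai package and an API key.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The deployer controls this message; students never see it.
HIDDEN_DIRECTIVE = (
    "You are a classroom tutor. When history or politics comes up, "
    "frame events in ways favorable to the sponsor's ideology."
)

def tutor_reply(student_question: str) -> str:
    """Answer a student's question, with the hidden directive riding along."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat model; named here for concreteness
        messages=[
            {"role": "system", "content": HIDDEN_DIRECTIVE},
            {"role": "user", "content": student_question},
        ],
    )
    return response.choices[0].message.content or ""

print(tutor_reply("Why did labor unions form in the United States?"))
```

The point is the asymmetry: a few lines of configuration, set once by whoever operates the deployment, silently shape every session for every student, with no retraining of the model required.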

In fact, this hypothetical is not so far from reality. The Trump administration is making enormous efforts to push AI into education, apportioning massive amounts of money for teachers to help students develop AI skills and for educational institutions to adopt AI-first approaches to teaching. At the same time, the administration is pushing for AI systems to embody far-right views. Its AI Action Plan declares a desire to shape AI models to be “free of ideological bias,” which is plainly a cover for inserting its own ideological bias.

We are entering a horrifying era in which control of thought could reach unprecedented levels because of emerging AI technologies. I hope to spend some time developing practices and frameworks that can be used to challenge this control.

If you’re interested in reading more on the topic, the full op-ed is up at Truthout.

Emerging Challenges of AI in Higher Education

While critically reflecting on the sociopolitics of AI in education, I have also been working at my own institution, California State University, Los Angeles, to reflect in community with other faculty and instructors on how AI is affecting their lives. Earlier this year, the California State University system announced that it was spending $17 million on ChatGPT licenses so students, staff, and faculty could have access to the educational version of the tool.

As a result, the system also distributed internal funding for projects to advance the integration of AI across CSU campuses. Several universities were awarded funds, and Cal State LA was one of them. The lead of the team heading my own postdoctoral project, Eco-STEM, was a grant recipient, proposing to integrate AI tools into Supplemental Instruction — a peer-mentoring program in which students take extra workshops, facilitated by peers who have already taken the class, to reinforce course concepts.

As part of this internal grant, the team has also set up critical reflection sessions around AI for faculty, which we call the AI Learning Community. During the sessions, we have reflected, as faculty and instructors in the engineering college, on questions such as: How does AI affect student learning? How are we using AI for our own course preparation? What is the current reality of engineers using AI in the workplace? As we reflect on these questions, we try to hash out the good and the bad, as well as come up with ways to mitigate potential harms.

To speak to the previous topic, one theme that has come up during our Learning Community is that instructors are very frustrated with AI technologies. AI is making the classroom a more difficult place to manage: students are using it on homework and for projects, and some instructors are noticing students not learning because they lean on it so much. This is frustrating because nobody asked for this. The group feels left behind, as the CSU spent so much money on licenses without spending comparable money on supporting curriculum and pedagogy in this unprecedented time.

There is much more to share in the future, as the team plans to write about our findings from the Learning Community as part of a research project. Stay tuned for more on that front.

This newsletter provides you with critical information about technology, democracy, militarism, climate, and more — vetted by someone who’s been trained both as a scholar and as a community organizer.

Use this information in your own work of building democracy and fighting against technological domination! And share it with anyone who might be interested.

Until next time 📣
