
I am teaching a course on scientific computing this semester, and in the first part, I introduce Schelling’s Model of social segregation. This allows students without any coding background to gain hands-on experience with a computer program (MATLAB) before we delve into more complex material later in the course. I planned to code the “Game of Life” with my students during one lecture. Since the code structures of the social segregation model and the game are quite similar, I believed that providing some basic code snippets would give them a useful starting point.
During that lecture, after about 20 minutes, we had developed a simple MATLAB program to run the game, and everyone was excited by the achievement. At the end of the lecture, I demonstrated the capabilities of Generative AI by asking it to produce MATLAB code for simulating Schelling’s Model of social segregation. The students were astounded by how quickly the AI generated the code. However, when we copied and pasted this code into MATLAB, I pointed out that some versions of the output contained errors: either MATLAB displayed error messages, or the two types of agents failed to move as they should. The key takeaway from that lecture was, “Don’t blindly trust what you get from AI.” Nonetheless, the generated code was nearly correct and could serve as a valuable building block for the final version.
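For readers who want a concrete picture, a minimal Game of Life update in MATLAB fits in a dozen lines or so. The sketch below is only illustrative (it is not the exact code we wrote in class) and assumes a wrap-around grid:

```matlab
% Minimal Game of Life sketch (illustrative; not the exact in-class code).
n = 50;                     % grid size
G = rand(n) > 0.7;          % random initial state: 1 = alive, 0 = dead
for t = 1:100
    % Count the eight neighbours of every cell with wrap-around boundaries.
    N = circshift(G,[1 0]) + circshift(G,[-1 0]) + circshift(G,[0 1]) + circshift(G,[0 -1]) ...
      + circshift(G,[1 1]) + circshift(G,[1 -1]) + circshift(G,[-1 1]) + circshift(G,[-1 -1]);
    % A live cell survives with 2 or 3 neighbours; a dead cell is born with exactly 3.
    G = (G & (N == 2 | N == 3)) | (~G & N == 3);
    imagesc(G); axis equal tight; title(sprintf('Generation %d', t)); drawnow;
end
```

The Schelling simulation shares the same skeleton: a grid, a neighbour count, and an update rule applied inside a loop, which is why the in-class exercise transfers so directly.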
For the assessment scheme, I require students to complete a mini-project centered around Schelling’s Model. They will begin by creating a simulation of the model, allowing users to adjust several parameters, such as the percentage of different types of agents, the amount of empty space, the satisfaction level, and the maximum number of iterations. Following the development of their code, students will conduct related investigations to deepen their understanding of the model’s implications. Based on their findings, they will comprehensively summarize the results and prepare a four-page report. This project will reinforce their coding skills and enhance their ability to analyze and communicate complex concepts effectively.
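To make the project brief concrete, here is a minimal sketch of what such a parameterised simulation might look like in MATLAB. The parameter names and the move rule (each unsatisfied agent jumps to a randomly chosen empty cell) are my own illustrative choices, not a prescribed template:

```matlab
% Illustrative skeleton of a parameterised Schelling simulation (not a model solution).
n        = 50;      % board size (n-by-n)
pEmpty   = 0.10;    % fraction of empty cells
pTypeA   = 0.50;    % fraction of occupied cells that are type A (the rest are type B)
satLevel = 0.5;     % minimum fraction of like-type neighbours for an agent to be satisfied
maxIter  = 200;     % maximum number of iterations

% Random initial board: 0 = empty, 1 = type A, 2 = type B.
r = rand(n);
board = zeros(n);
board(r >= pEmpty & r < pEmpty + (1 - pEmpty)*pTypeA) = 1;
board(r >= pEmpty + (1 - pEmpty)*pTypeA) = 2;

for iter = 1:maxIter
    % Find unsatisfied agents: too few like-type neighbours in the Moore neighbourhood.
    unhappy = [];
    for k = find(board(:) ~= 0)'
        [i, j] = ind2sub([n n], k);
        blk  = board(max(i-1,1):min(i+1,n), max(j-1,1):min(j+1,n));
        same = sum(blk(:) == board(k)) - 1;      % like-type neighbours (exclude self)
        occ  = sum(blk(:) ~= 0) - 1;             % occupied neighbours (exclude self)
        if occ > 0 && same/occ < satLevel
            unhappy(end+1) = k;                  %#ok<AGROW>
        end
    end
    if isempty(unhappy), break; end              % everyone is satisfied
    % Move each unsatisfied agent to a randomly chosen empty cell.
    for k = unhappy
        empties = find(board == 0);
        dest = empties(randi(numel(empties)));
        board(dest) = board(k);
        board(k) = 0;
    end
    imagesc(board); axis equal tight; title(sprintf('Iteration %d', iter)); drawnow;
end
```

Students are free to structure their own code differently; the point of the sketch is only to show how the four user-adjustable parameters enter the simulation.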
Some investigations were quite intriguing, but many projects tended to be straightforward extensions of the model. Students explored various factors, such as the impact of the critical satisfaction level, the relationship between the number of iterations and the board size, and the percentage of empty space. After grading all the reports, I realized I needed to guide students toward more innovative ideas for their projects. To achieve this, I considered collaborating with GenAI to brainstorm better concepts for the mini-project. I talked to GPT-4o-mini, which I found affordable (at 15 points per message) and likely powerful enough for most coursework. I asked, “I need to do a math project on the Schelling model for social segregation. First, create a simulation of Schelling’s model. Users should be able to set several parameters of the model, including the percentage of different types of agents and empty space, satisfactory level, and the maximum number of iterations. Then, perform related research based on the code developed. Suggest some possible project ideas that I could explore.”
In response, GenAI provided a pseudo-code outline and eight different investigation ideas. After reviewing these concepts, I found many of them rather standard, but one particularly caught my attention: exploring the network aspect of the model. Since I’m unfamiliar with coding network problems, I thought this would be a valuable opportunity to learn how the model could be carried over to networks. I asked GenAI for more details and for a MATLAB program tailored to this application.
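For context, my reading of the idea is that agents sit on the nodes of a graph and, instead of relocating on a board, an unsatisfied node rewires one of its edges. The sketch below is my own illustration of that interpretation, not the code GenAI produced; the variable names and the rewiring rule are assumptions:

```matlab
% Illustrative network variant (my own interpretation, not GenAI's generated code):
% agents sit on graph nodes and, when unsatisfied, rewire an edge instead of moving.
nNodes   = 60;
pEdge    = 0.08;                           % edge probability of the random graph
satLevel = 0.5;                            % minimum fraction of like-type neighbours
maxIter  = 50;

A = triu(rand(nNodes) < pEdge, 1);         % random upper-triangular adjacency
G = graph(A | A');                         % undirected, unweighted graph
type = randi(2, nNodes, 1);                % each node is type 1 or type 2

for iter = 1:maxIter
    rewired = false;
    for v = 1:nNodes
        nb = neighbors(G, v);
        if isempty(nb), continue; end
        if mean(type(nb) == type(v)) < satLevel
            % Drop one unlike neighbour and connect to a random like-type non-neighbour.
            unlike = nb(type(nb) ~= type(v));
            cand   = setdiff(find(type == type(v)), [v; nb]);
            if ~isempty(unlike) && ~isempty(cand)
                G = rmedge(G, v, unlike(randi(numel(unlike))));
                G = addedge(G, v, cand(randi(numel(cand))));
                rewired = true;
            end
        end
    end
    plot(G, 'NodeCData', type); title(sprintf('Iteration %d', iter)); drawnow;
    if ~rewired, break; end
end
```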
The initial version of the program, however, was not perfect. It lacked a visualization function, which made debugging difficult because I couldn’t see what happened during each network update. After the program was updated, I encountered some bugs in the visualization function that GenAI could not resolve on its own. I suggested a possible solution, and GenAI was then able to fix the bug. However, when I tested the program in a simple setting, I could see the network, but the nodes weren’t reconnecting correctly, which indicated that more bugs were likely present. At this point, the program was relatively short, around 100 lines of MATLAB code, yet I struggled to resolve the issues with GPT’s assistance. Eventually, I reviewed the program myself and discovered why the network wasn’t updating during iterations: the local variables were not being updated in the main code.

It also felt odd to introduce empty nodes into the network model. After about an hour of adjustments, I arrived at something more reasonable. I also collaborated with the AI to compute the average satisfaction level of the network (its interpretation of this metric didn’t fully align with my own definition). I still wasn’t satisfied with the definition of empty nodes, with how the code reconnected unsatisfied nodes to the rest of the network, and with the fact that the simulation ran only a single trial. For those interested in the entire conversation, it can be found here: GenAIConversation_SchellingModel. The resulting programs are available here: GenAI_SchellingModel.zip.
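As a technical aside on the local-variable bug: MATLAB functions receive copies of their arguments, so an update routine must return the modified network and the caller must reassign it, or the main loop will never see the change. The toy pattern below illustrates this, together with one possible definition of the average satisfaction level (the fraction of like-type neighbours, averaged over non-isolated nodes); the function and variable names are mine, not those in the generated code:

```matlab
% MATLAB passes arguments by value, so the caller must capture the returned
% state; otherwise the network in the main loop never changes.
% (All names below are illustrative; this is not the generated code.)
function demoNetworkUpdate
    G = graph(bucky);                         % any example graph
    type = randi(2, numnodes(G), 1);          % two agent types
    for iter = 1:10
        [G, type] = updateNetwork(G, type, 0.5);   % reassigning G here is the fix
        fprintf('iter %d: avg satisfaction %.3f\n', iter, avgSatisfaction(G, type));
    end
end

function [G, type] = updateNetwork(G, type, satLevel) %#ok<INUSD>
    % ... rewire unsatisfied nodes here, as in the earlier sketch ...
end

function s = avgSatisfaction(G, type)
    % One possible definition: the fraction of neighbours sharing a node's type,
    % averaged over all non-isolated nodes.
    frac = nan(numnodes(G), 1);
    for v = 1:numnodes(G)
        nb = neighbors(G, v);
        if ~isempty(nb)
            frac(v) = mean(type(nb) == type(v));
        end
    end
    s = mean(frac, 'omitnan');
end
```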
I believe this can serve as a solid foundation for an interesting mini-project. The generated ideas provide a starting point, even if the ultimate goal hasn’t been fully achieved. This framework allows students to continue building upon it. If those eight ideas don’t spark enough creativity, we can always engage further with the AI before requesting code. It’s crucial to remember that we must actively contribute to the conversation. We aim to collaborate with AI rather than depend on it to complete the project. The quality of the AI’s output is closely tied to how we interact, so our input and engagement are essential for maximizing its potential.