In the problem-solving first-year seminar class, we are beginning our discussion of AI/Machine Learning.
Step 1: Reading : Dr. Kelly Swartz from English recommended that we read The Perfect Match by Liu; it was on the reading list for her Science Fiction class this semester. It's a long but interesting story with a few curve balls. Maybe there is nothing we can do but go toward AI, for better or worse? Likely worse. But who knows?
Step 2: Discussion : We discussed the reading and our initial thoughts on AI. It wasn't my best professor moment, but people were talking a little, and just about everyone in the class filled out the open-ended survey questions.
Step 3: Using GPT : The students have just measured the speed of light in the lab. Now they have to use ChatGPT to help them measure the speed of sound, which they have to measure with stuff around the house. Oh, and this week's blog post also has to be generated by ChatGPT.
In the future:
Step 4: Residual : Learn what the residual is in curve fitting, why it's important, and what it means.
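The residual idea fits in a few lines of Python. This is a minimal sketch with made-up data and an assumed linear model, just to show the computation:

```python
# Residual = measured value minus model prediction, at each data point.
# The data and the model y = 2x + 1 below are illustrative, not real.
xs = [0, 1, 2, 3]
ys = [1.1, 2.9, 5.2, 6.8]  # hypothetical noisy measurements

def model(x):
    return 2 * x + 1

residuals = [y - model(x) for x, y in zip(xs, ys)]
sum_sq = sum(r * r for r in residuals)  # what least-squares fitting minimizes
print(residuals)
print(sum_sq)
```

Small residuals scattered around zero mean the model fits well; a pattern in the residuals means the model is missing something.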
Step 5: Non-linear Curve Fitting in Excel by hand
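The "by hand" fitting loop from a spreadsheet can be sketched in Python too: guess parameters, compute the sum of squared residuals, and keep the best guess. The data and the model y = a·e^(bx) here are invented for illustration:

```python
# Excel-style "tweak and look" fitting: try parameter pairs on a grid,
# score each by the sum of squared residuals, keep the lowest score.
import math

xs = [0, 1, 2, 3]
ys = [1.0, 2.7, 7.4, 20.1]  # hypothetical data, roughly e^x

def ssr(a, b):
    return sum((y - a * math.exp(b * x)) ** 2 for x, y in zip(xs, ys))

# crude grid search over a and b from 0.5 to 1.5 in steps of 0.1
best = min(((a / 10, b / 10) for a in range(5, 16) for b in range(5, 16)),
           key=lambda p: ssr(*p))
print(best)  # lands on (1.0, 1.0) for this data
```

Excel's Solver automates exactly this: it adjusts the parameter cells until the sum-of-squared-residuals cell is as small as it can make it.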
Step 6: Use the curve-fitting ideas, and their model of population growth from last week, to make predictions about the population growth of the world. We are going to keep everything 1D and number-based, obviously.
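A 1D population prediction can be this small. This sketch steps a logistic growth model forward a year at a time; the growth rate and carrying capacity are illustrative guesses, not fitted values:

```python
# Logistic growth dP/dt = r*P*(1 - P/K), stepped forward with Euler's method.
# All three numbers below are assumptions for illustration only.
P = 8.0      # world population in billions, rough starting value
r = 0.01     # growth rate per year (assumed)
K = 10.4     # carrying capacity in billions (assumed)

for year in range(100):
    P += r * P * (1 - P / K)   # one-year Euler step

print(round(P, 2))  # predicted population in billions after 100 years
```

The prediction creeps toward K and the growth flattens out, which is the whole point of the model: the interesting physics is in choosing r and K, and the fit from Step 5 is what pins them down.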
Any other ideas out there? Does anyone have a nice simple example on how optimization works on a 10D case?
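On the 10D question: gradient descent is one answer, and it is genuinely the same idea in 10 dimensions as in 1. Here is a minimal sketch on a made-up function f(x) = Σ(xᵢ - i)², whose minimum is obviously xᵢ = i:

```python
# Gradient descent in 10 dimensions. Each coordinate is nudged downhill
# along its own partial derivative; nothing new happens beyond 1D except
# that we do it to all ten coordinates at once.
target = list(range(10))            # the known minimum, for checking
x = [0.0] * 10                      # start at the origin

lr = 0.1                            # learning rate (step size)
for step in range(200):
    grad = [2 * (xi - ti) for xi, ti in zip(x, target)]  # df/dx_i
    x = [xi - lr * g for xi, g in zip(x, grad)]

print([round(xi, 3) for xi in x])   # converges to [0, 1, ..., 9]
```

For a nasty non-convex function the picture is messier (local minima, saddle points), but the mechanics of the update are exactly this.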
My personal take: I think AI in education is generally a bad thing. It dulls our senses and our problem-solving abilities. In the 1990s I felt the same way about calculators, and I have avoided using them as much as possible. On exams as an undergraduate, I tried to do the math by hand and to make approximations when appropriate. But at the same time, I knew how to use my calculator, and especially how to use computer simulations to do calculations en masse. It is important to be comfortable solving problems with and without the technology. Yes, a calculator allows us to solve big problems, a computer allows even bigger problems, and AI will let us solve HUGE problems quicker, but if you don't understand the problem you are trying to solve, it's all kind of meaningless. In general, though, AI is a skill. We need to know how to use it.
Experimental physicists as a whole are a wacky bunch. We tend to build all our experiments from scratch and really understand what each part of the experiment is doing. Building up the experiment this way lets us understand the real limitations of our tools. We might push our models too far if we don't understand how they work. This might seem unimportant, but if you are using a model to, say, predict how many people you need to feed, errors can have real consequences.
As such I think there is a real need to understand how to solve problems (write essays, etc) without ChatGPT, how to solve problems (write essays, etc) with ChatGPT, and how ChatGPT works.
By the way, I am no fan of AI, but it is a skill, and to be competitive in today's world you need to know it.
And of course I have to show a picture of the AI-generated Piggie. My kid – who is a good artist – hates AI-generated art because it takes jobs away from artists. While I 100% agree with them, ultimately my opinion isn't likely to matter, as the AI train is coming whether we like it or not.