I have been thinking about this, and it seems like an asset that students want to do as little work as possible to get course credit. They also love playing games of various sorts. So instead of killing trees, printing out pages of material, and having students pay substantial printing fees just so we can put distance between students reading the material and ChatGPT, why not turn it around completely?
1. Instead of putting up all sorts of barriers between students and ChatGPT, have students explicitly use ChatGPT to complete the homework
2. Then compare the ChatGPT outputs across students and see how diverse they are (a rough sketch of this comparison follows the list)
3. If the ChatGPT outputs are extremely similar, then the game is to critique that output: find the gaps in ChatGPT's work, the insights it missed, and what it could have done better
4. If the ChatGPT outputs are diverse, how do we figure out which is best? What caused the diversity? Are all the outputs accurate, or do some contain errors?
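Step 2 is easy to mechanize. Here is a minimal sketch, assuming each student submits their ChatGPT answer as plain text; it uses `difflib` from the Python standard library for a rough pairwise similarity, and the 0.8 threshold separating "critique mode" (step 3) from "compare mode" (step 4) is an arbitrary assumption.

```python
from difflib import SequenceMatcher
from itertools import combinations

# Hypothetical submissions: student name -> the ChatGPT answer they turned in
answers = {
    "alice": "Quicksort partitions the array around a pivot element...",
    "bob":   "Quicksort picks a pivot and partitions the array...",
    "carol": "Mergesort splits the array in half and recursively sorts...",
}

for (a, text_a), (b, text_b) in combinations(answers.items(), 2):
    ratio = SequenceMatcher(None, text_a, text_b).ratio()  # 0.0..1.0 similarity
    mode = "critique mode" if ratio > 0.8 else "compare mode"
    print(f"{a} vs {b}: similarity {ratio:.2f} -> {mode}")
```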
Similarly, when it comes to coding, instead of worrying that ChatGPT can zero-shot a perfect `quicksort` or `memcpy`, why not game it (rough sketches of these games follow the list):
1. Write some test cases that could make that specific implementation of `quicksort` or `memcpy` fail
2. Could we design the input data such that quicksort hits its worst case runtime?
3. Is there an algorithm that would sort faster than quicksort for that specific input?
4. Could there be architectures where the assumptions that make quicksort "quick" fail to hold, so that something simpler and worse on paper, like a cache-aware sort, actually runs faster in practice than quicksort?
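For item 1, here is a sketch of the kind of failure a test case could catch. The buggy `quicksort` below is hypothetical, not actual ChatGPT output: its partition uses strict `<` and `>`, so elements equal to the pivot get silently dropped, and any input with duplicates exposes it.

```python
def buggy_quicksort(xs):
    # Hypothetical buggy implementation: the two strict comparisons below
    # both exclude elements equal to the pivot, so duplicates vanish.
    if len(xs) <= 1:
        return xs
    pivot = xs[0]
    return (buggy_quicksort([x for x in xs if x < pivot])
            + [pivot]
            + buggy_quicksort([x for x in xs if x > pivot]))

print(buggy_quicksort([3, 1, 2]))  # [1, 2, 3] -- looks fine without duplicates
print(buggy_quicksort([2, 2, 1]))  # [1, 2]   -- one of the 2s disappears
```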
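For item 2, a minimal sketch: a textbook quicksort that always picks the first element as pivot. Feeding it already-sorted input makes every partition maximally unbalanced, so the comparison count grows as roughly n²/2 instead of n log n.

```python
def quicksort(xs, stats):
    if len(xs) <= 1:
        return xs
    pivot, rest = xs[0], xs[1:]
    stats["comparisons"] += len(rest)  # each element gets compared to the pivot
    left = [x for x in rest if x < pivot]
    right = [x for x in rest if x >= pivot]
    return quicksort(left, stats) + [pivot] + quicksort(right, stats)

for n in (100, 200, 400):
    stats = {"comparisons": 0}
    quicksort(list(range(n)), stats)  # sorted input = worst case for this pivot
    print(f"n={n}: {stats['comparisons']} comparisons")  # quadruples as n doubles
```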
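For item 3, one concrete answer: if we happen to know the input consists of integers in a tiny fixed range, counting sort runs in O(n + k) and sidesteps the O(n log n) lower bound that applies to comparison sorts like quicksort. The range bound k = 256 here is an assumption about that specific input.

```python
import random

def counting_sort(xs, k=256):
    # Assumes every element is an integer in [0, k); O(n + k), no comparisons.
    counts = [0] * k
    for x in xs:                        # one pass to tally each key
        counts[x] += 1
    out = []
    for value, count in enumerate(counts):
        out.extend([value] * count)     # emit each key in order
    return out

data = [random.randrange(256) for _ in range(100_000)]
assert counting_sort(data) == sorted(data)
```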
I have several more paragraphs of thoughts on this topic, but I will leave it at this for now to calibrate whether my thinking is in the minority.