For official courses, we go over the generated course with the creator to vet the content. Generally they're pretty impressed but have a few things they'd like to change/add before publishing.
For self-created courses, it's generally been quite accurate, and we're playing around with some eval metrics to make it as good as possible, but quality is definitely a concern.
So in less promotional words:
- For official courses, the creators do some quality control and make the necessary fixes.
- For self-created courses, there is zero human supervision or quality control.
Is that correct?
Is the course creator being impressed the most important metric? Are there other, more concrete metrics you can use to determine quality from the perspective of a student?
I'm curious whether you are using any methodologies from the digital learning space, like knowledge tracing to help ensure that learners are actually retaining knowledge and improving over time, or knowledge mapping to understand the gaps that might exist in your content.
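(For context on what I mean by knowledge tracing: in its simplest form, Bayesian Knowledge Tracing just keeps a per-skill probability of mastery and updates it after every answer. A rough sketch, with made-up parameter values, not anyone's production code:)

    # Minimal Bayesian Knowledge Tracing sketch (illustrative parameters, not tuned).
    # p_mastery is P(L): the probability the learner has mastered the skill.

    def bkt_update(p_mastery, correct,
                   p_guess=0.2,   # chance of answering correctly without mastery
                   p_slip=0.1,    # chance of answering incorrectly despite mastery
                   p_learn=0.15): # chance of acquiring the skill on this attempt
        if correct:
            evidence = p_mastery * (1 - p_slip)
            total = evidence + (1 - p_mastery) * p_guess
        else:
            evidence = p_mastery * p_slip
            total = evidence + (1 - p_mastery) * (1 - p_guess)
        posterior = evidence / total
        # Account for learning that happens during the practice opportunity itself.
        return posterior + (1 - posterior) * p_learn

    # Example: mastery estimate after a correct, then an incorrect, answer.
    p = 0.3
    for answer in (True, False):
        p = bkt_update(p, answer)
        print(round(p, 3))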
Do you maintain your own skills taxonomy? Are you tagging your questions or assessment events with knowledge components or skills of any kind to understand what you are testing your students for?
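To make that concrete: even a very simple tagging scheme, where each assessment item lists the knowledge components it exercises, lets you report coverage and spot skills you never actually test. A hypothetical sketch (the schema and names are mine, just to illustrate):

    # Hypothetical item-tagging schema: each assessment item lists the
    # knowledge components (skills) it is meant to exercise.
    from dataclasses import dataclass, field
    from collections import Counter

    @dataclass
    class AssessmentItem:
        item_id: str
        prompt: str
        knowledge_components: list[str] = field(default_factory=list)

    items = [
        AssessmentItem("q1", "Solve 2x + 3 = 7", ["linear-equations", "arithmetic"]),
        AssessmentItem("q2", "Factor x^2 - 9", ["factoring"]),
    ]

    # Coverage report: which skills the assessment actually tests, and how often.
    coverage = Counter(kc for item in items for kc in item.knowledge_components)
    print(coverage)  # Counter({'linear-equations': 1, 'arithmetic': 1, 'factoring': 1})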
All of this is really cool; I'm just curious how far you've gotten with some of this. There is a very fine line in online educational content between making the student's life more difficult and actually helping them learn, especially when you get into auto-generating content, and especially if you aren't following solid principles to verify your content. (I work for an online education company, particularly in the space of training LLMs and verifying their outputs for use in educational contexts.)