This kind of dataset is really valuable because most conversations about AI coding tools are based on anecdotes rather than actual usage patterns. I’d be curious about a few things from the sessions:
1. how often developers accept vs. modify generated code
2. which tasks AI consistently accelerates (tests, refactoring, boilerplate?)
3. whether debugging sessions become longer or shorter with AI assistance
My experience so far is that AI is great for generating code, but the real productivity boost comes when it helps navigate large codebases and reason about existing architecture.
1. can only be partly answered, because we only capture the edits that are prompted, not the manual ones.
2. for us, actually all of them, since we do everything with AI and invest heavily and continuously in reducing the number of iterations we need.
3. that's a good one. We don't have anything specific for debugging yet, but it might be an interesting class for a type of session.
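On that third point, here's a rough sketch of what a keyword-based first pass at session classification could look like. This is purely illustrative: the `Session` record, the `classify` function, and the keyword lists are hypothetical, not part of the dataset's actual schema.

```python
from dataclasses import dataclass

# Hypothetical session record -- the field name is an assumption,
# not the real dataset schema.
@dataclass
class Session:
    prompts: list[str]

# Naive keyword heuristics for a first-pass session type label.
DEBUG_HINTS = ("fix", "bug", "error", "traceback", "crash")
TEST_HINTS = ("test", "coverage", "assert")
REFACTOR_HINTS = ("refactor", "rename", "extract", "clean up")

def classify(session: Session) -> str:
    """Label a session by the dominant intent of its prompts."""
    text = " ".join(session.prompts).lower()
    scores = {
        "debugging": sum(h in text for h in DEBUG_HINTS),
        "testing": sum(h in text for h in TEST_HINTS),
        "refactoring": sum(h in text for h in REFACTOR_HINTS),
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "other"

if __name__ == "__main__":
    s = Session(prompts=["fix the error in the login handler"])
    print(classify(s))  # -> "debugging"
```

A real classifier would probably look at more than prompt text (tool calls, files touched, whether a test run preceded the session), but even a crude label like this would let you compare session length across types.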