It's an enormously cool project (and also feels like the next logical thing to do after all the existing modalities)
But it feels eerie to read a detailed story of how they built and improved their setup and what obstacles they encountered, complete with photos - without any mention of who is doing the things we are reading about. There is no mention of the staff or even the founders on the whole website.
I had a hard time judging how large this project even is. The homebuilt booths and trial-and-error workflow sound like a "three-person garage startup", but the bookings schedule suggests a larger team.
(At least there is an author line on that blog post. I had to google the names to get some background on this company.)
You should consider an "about us" page :)
Really cool dataset! Love seeing people actually doing the hard work of generating data rather than just trying to analyze what exists (I say this as someone who’s gone out of his way to avoid data collection).
Have you played at all with thought-to-voice? Intuitively I'd think EEG readout would be more reliable for spoken than for typed words, especially if you're not controlling for keyboard fluency.
It's interesting that the model generalizes to unseen participants. I was under the impression that everyone's brain patterns were different enough that the model would need to be retrained for new users.
Though, I suppose if the model had LLM-like context where it kept track of brain data and speech/typing from earlier in the conversation then it could perform in-context learning to adapt to the user.
I lol'd at the hardware "patch" that kept the software from crashing--removing all but the alpha-numeric keys (!?). Holy cow, you had time to collect thousands of hours of neurotraces but couldn't sanitize the inputs to remove a stray [? That sounds...funky.
How well does it work when trained for 100 hours on just one participant? As in a model trained from the ground up for just one person?
Very cool project! I had a couple ideas during the read:
* A ceiling-based pulley system could help take the physical load off the users and may allow for increased sensor density. Some large/public VR setups do this.
* I'm sure you considered it, but a double-conversion UPS might reduce the noise floor of your sensors and could potentially support multiple booths. Expensive though, and it's already mentioned that data quantity > quality at this stage. Maybe a future fine-tuning step could leverage this.
Cool write-up, and I hope to see more in the future!
This is a cool setup, but naively it feels like it would require hundreds of thousands of hours of data to train a decent generalizable model that would be useful for consumers. Are there plans to scale this up, or is there reason to believe that tens of thousands of hours are enough?
This is an interesting dataset to collect, and I wonder whether there will be applications for it beyond what you're currently thinking.
A couple of questions: What's the relationship between the number of hours of neurodata you collect and the quality of your predictions? Does it help to get less data from more people, or more data from fewer people?
The example sentences generated “only from neural data” at the top of this article seem surprisingly accurate to me, like, not exact matches but much better than what I would expect even from 10k hours:
“the room seemed colder” -> “ there was a breeze even a gentle gust”
Cool post! I'm somewhat curious whether the data quality scoring has actually translated into better data; do you have numbers on how much more of your data is useful for training vs in May?
Interesting dataset! I'm curious what kind of results you would get with just EEG, compared to multiple modalities? Why do multiple modalities end up being important?
Really interested in how accuracy improves with the scale of the data set. Non-invasive thought-to-action would be a whole new interaction paradigm.
Makes sense that CL ends up being the best for recruiting first-time participants. Curious what other things you tried for recruitment and how useful they were?
Did you consider trying to collect data in a much poorer country that still has high quality English? e.g. the Philippines
What's the plan for after this mind reading helmet works reliably?
Loved watching this unfold in our basement. : )
what's the basis for conversion between hours of neural data and number of tokens? is that counting the paired text tokens?
Your engineers were so preoccupied with whether or not they could, they didn't stop to think if they should.
Those predictions sound good enough to get you CIA funding.
Hey I'm Nick, and I originally came to Conduit as a data participant! After my session, I started asking questions about the setup to the people working there, and apparently I asked good questions, so they hired me.
Since I joined, we've gone from <1k hours to >10k hours, and I've been really excited by how much our whole setup has changed. I've been implementing lots of improvements to the whole data pipeline and the operations side. Now that we train lots of models on the data, the model results also inform how we collect data (e.g. we care a lot less about noise now that we have more data).
We're definitely still improving the whole system, but at this point, we've learned a lot that I wish someone had told us when we started, so we thought we'd share it in case any of you are doing human data collection. We're all also very curious to get any feedback from the community!