Happy to answer any questions people have about OpenBCI https://openbci.com/
Gabe seems to talk a lot about "inserting" data (like feelings) into the brain, instead of reading it. Is the technology really there already? Can we reliably read data from the brain (i.e. using it as input for a digital system)? And regarding inserting, what is the coolest thing you've done that you can share with us?
As far as "writing" back into the brain, the coolest thing I've seen was the "BrainNet" project from University of Washington which used transcranial magnetic stimulation (TMS)
https://www.washington.edu/news/2019/07/01/play-a-video-game...
The science and tech are advancing very fast, but I don't think it's accurate enough yet for everyday use as a controller for devices. 90% accuracy sounds great in a paper, but imagine if your mouse clicks or keystrokes didn't register 1 out of 10 times.
What feels way more likely is that we'll see biometric data being collected by more consumer tech devices (cellphones, laptops, headphones) and used as one of many inputs to improve software applications and operating systems. Could EMG or EEG data be used to improve iOS autocorrect and reduce fat finger mistakes? That's a mundane application for crazy tech, but it's the kind of thing that I think will be a necessary intermediate step in us learning how to use these types of signals in everyday ways.
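To put some numbers on the 90% point above: per-event accuracy compounds, so the chance of getting a whole sequence of inputs right decays exponentially with its length. A quick illustrative calculation (the 90% figure is from the comment; everything else is just arithmetic):

```python
# Why 90% per-event accuracy is unusable for an input device: the probability
# of an error-free run of n inputs is 0.9^n, which collapses quickly.
p = 0.90  # per-click/keystroke recognition accuracy

for n in (1, 5, 10, 50):
    print(f"{n:3d} inputs all correct: {p**n:.1%}")
```

Even a five-character word only comes out clean about 59% of the time, which is why it has to be one input among many rather than the sole channel.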
I saw the example they put out: a horror game that responds to your fears by reading the data you produce. Is the tech actually at a point where a dedicated team can accurately and consistently identify biomarkers correlated with a fear response? Or differentiate between fear, anger, happiness, etc.?
Has there been any research into using this tech effectively to treat anxiety/depression? Really interested in this focus once "writing" information is possible, especially with more severe mental health issues that aren't related to physical brain deterioration. But it seems (per your comments) that that's still a ways away, haha. It would be really cool to see a psychiatrist and have them treat my anxiety with a few bip bops from an industrial headset, or go home with a prescription program to run once a day. That just sounds like science fiction.
For example, if you want to find the memory address for your gun's ammo, you search for a starting value in memory, say '30', get all addresses that match that value, fire the gun, and then find which of those addresses now hold the value 29. Continue the search until you narrow it down to just one address. Then you can use that address to query the ammo from a 3rd-party program that alerts you when you're low, or even write to the memory address to give yourself more ammo.
Obviously the brain isn't as discrete but I feel like if I could play around with a BCI I could find fun signals for when I'm thinking about 'apples' vs 'oranges' and slowly build up an interface.
Have you been able to use a BCI to detect when you're thinking about something specific?
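The narrowing loop described above can be sketched in a few lines. Everything here (the fake memory map, the addresses, the `narrow` helper) is made up for illustration, with a dict standing in for process memory:

```python
# Hypothetical sketch of the iterative memory-scan narrowing described above.
memory = {0x1000: 30, 0x2000: 30, 0x3000: 7}  # fake address -> value map

def narrow(candidates, expected):
    """Keep only the addresses whose current value matches what we expect."""
    return [a for a in candidates if memory[a] == expected]

candidates = narrow(memory.keys(), 30)  # first scan: every address holding 30
memory[0x1000] -= 1                     # "fire the gun": the real ammo cell ticks to 29
candidates = narrow(candidates, 29)     # second scan narrows it to one address
print(candidates)
```

Each "fire the gun" step halves the ambiguity in the same way, which is why a handful of scans is usually enough in a real process.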
Several groups have shown that they can "decode" a remembered image from brain activity. This is comparatively easy when the images are simple and there are only a few possibilities, but can generalize to larger sets of images and even (sort of) never-before-seen ones. Sensory and motor information is relatively accessible; I don't know that anyone's making great progress decoding thoughts like "I should be home by 8pm".
If this is up your alley, you may want to check out work done by Frank Tong's lab at Princeton (e.g., https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1808230/) or Jack Gallant's group at UC Berkeley (e.g., https://www.sciencedirect.com/science/article/pii/S105381191...), among others.
(FWIW: absolutely not an expert)
All the reconstructions are super low resolution, but I wonder how much better they can get? I'd be much more open to brain-computer interfaces if it didn't involve drilling through my skull.
[0] https://scitechdaily.com/image-reconstruction-from-human-bra...
You can use ECoG, which is like EEG but placed under the dura mater (a protective membrane around the brain), though this is also highly invasive. And the spatial resolution is still pretty bad: you can get a hundred or so sensors in a few square centimeters.
I remember hearing way back around 2010 that Valve was messing around with BCI's. Valve has as recently as 2019 talked about EEG's and gaming [0]. Also recall reading about similar concepts over a decade ago. [1]
[0] https://venturebeat.com/2019/03/24/valve-psychologist-explor...
[1] https://vpa.sabanciuniv.edu/phpBB2/vpa_views.php?s=4&serial=...
I doubt very much that it reconstructs from the visual cortex. That is a non-trivial task even with fMRI, which has a much higher SNR than EEG.
It's basically using a pretrained VGG network as a latent space, which lets them synthesize images.
Let's say you have certain neuro-indicators which suggest person X may do Y.
Do we then Minority Report-style lock people up?
At the same time, it's going to be amazing for people in a coma or otherwise unable to communicate. We could have a fantastic world where we effectively live forever via some Matrix-like interface.
To me the primary issue of our day at this moment, is we have no users bill of rights. We are forced to use the services of large tech companies to participate in modern life yet there are few restrictions on their powers.
It's bad enough when it's our written thoughts. But the idea that it can be my unwritten thoughts too... that scares the heck out of me.
The less you use social media, the happier you'll generally be. No matter what you say, the context around it can change, and your life can be destroyed with it.
Breaking out of the social media matrix has been one of my best decisions, I was able to really live in 2019. Made tons of friends as well as a few partners. All of this happened in this place called reality.
Is it theoretically plausible that this activity could be read from many feet away, or further?
Hardware makers often aren't good at software, not to mention software updates. And even that gets wonky, like when Microsoft update got hijacked.
Point is, if we can't get IoT 100% right, or even 90% right, how can we trust IoT with physical interfaces into our bodies? That's the problem. And then what happens if the company who made your implant goes out of business? What do you do when those updates stop? Look at cellphones: supporting a cellphone for two years is too much for most hardware makers; they'd rather never update it.
Point is, even if the tech is 100% possible, we're way too far from business setups that allow for this to happen.
> if we can't get IoT 100% right, or even 90% right, how can we trust IoT with physical interfaces into our bodies?
Trust is a risk-reward calculus for any product, whether it's BCI, the microcontrollers in your car, or the (hopefully uninfected) produce in your supermarket. Many folks will find the BCI value outweighs the risks for some feature set that matters to them.
> And then what happens if the company who made your implant goes out of business? What do you do when those updates stop?
Certain applications will have to be designed and evaluated with longevity or long-term support in mind. You don't need that when talking about cellphones. You do for today's pacemakers.
Updates of any sort to the software are generally very difficult to do. The certification process for any medical device is hard enough, let alone for implants, let alone for life-critical implants, let alone the recertification process for an already implanted life-critical device. This is why you'll still see sonogram machines running XP, completely disconnected from the larger internet [0].
The questions listed are mostly already considered and have many mitigation strategies per the regulatory agency in charge. There are many other questions that regulators like the FDA demand answers to.
One thing about the business side is that risk/reward ratio. In medical development, FDA authorization typically occurs at the 10-12 year mark for a product (though it varies widely). Meaning that your start-up only gets the go-ahead to make money after about a decade of investment. That said, once you get to sell, you have an effective monopoly on the market. Hence the costs of new drugs and devices being so insanely high; it's that risk/reward imbalance on the business side.
[0] Another reason why IoT medicine is very difficult, among many.
Either accept that it can break or always have a backup plan.
>then what happens if the company who made your implant goes out of business?
The company going out of business doesn't always mean the product stops working.
If the product stops working, then I'll look for removal/replacement.
If you got eyes that required the internet to function, you made a poor and dangerous choice, regardless of whether the public or some company wants to punish you for some perceived public behavior. If you got eyes that required some internet company's continued acceptance of you as a client, you made an even worse choice.
The theme of CES for the past decade has been making products worse through software.
The ramifications of access like that are described surprisingly well in the show.
For instance, two close friends (they're in a relationship but young enough I'd just consider them close friends) use the direct wired connection feature because they feel comfortable enough doing so. But at some point one of them decides to use that private access to plant a virus of sorts. While the purpose is irrelevant here, it does show how brain connections, especially between people, can become increasingly vulnerable.
The main plot point of the anime, though, is a game application that can be installed on the Neurolink, which also has the ability to wipe away or hide memories.
While what Newell is talking about is still a ways away, it's interesting to consider exactly what we may need to be worried about when the time comes.
Even if they decided that they want to go public with a virtual keyboard/mouse/controller and virtual heads-up display that you use by wearing an electrode net, the current team would not be able to make that pivot. End users won't be debugging LabVIEW sketches, spending weeks training a neural network to recognize virtual keystrokes, or shaving spots on their scalp.
Personally, I expect that the V1 product here is approximately just that: a game controller. A few buttons, a pointer, maybe a little haptic feedback. It would be great for me if it supported text input and output faster than a keyboard and terminal, but I don't think that's likely to happen in an early version.
Even less likely is that we're going to jump straight to The Matrix. Valve needs to admit that, and be happy with a limited version, instead of letting it fizzle out in the lab.
I, for one, welcome the mind-reading devices.
I'm sure it'll be limited by physical contact, so I'm not much worried about the science fiction scenarios. However, the implications for police are worrisome enough that it warrants specific legislation as soon as possible (i.e., non-consensual mind reading should be made a grave crime, and a war crime, as soon as the technology is available).
I find this to be going in a bizarre, awkward direction. We as humans have self-regulating capabilities that can be tapped with some "self work". Is that a bad thing? Why are we trying to change that and create an app to control bodily functions? Our brains are perfectly capable of handling that, and have been for millions of years in our ancestors.
Being able to run "sleep" like its an app on your phone and directly control your brain would be life changing for us. I really hope it happens in our lifetime.
I keep my old glasses, which are smaller, to be able to fit them in my Vive, but that makes development impractical: remove the terminal/monitor glasses, put on the old glasses, and then the headset, every time I need to test something in VR. Nope, I'm not doing that.
Also, why are media businesses still adding fake laughter and annoying background music to everything they touch?
https://www8.hp.com/us/en/vr/reverb-g2-vr-headset-omnicept-e...
https://marshallbrain.com/manna
(It explores implants and using them vs. being used by them.)
Valve collects lots of data - that you can't turn off - and doesn't let you opt out.
(see if you can ask them to stop collecting time played in games or "achievements" or whatever)
"While technically hostile, Adbugs do not directly attack Vault Hunters. Instead, they follow their targeted Vault Hunter around and beam a semi-transparent advertisement at them, which is placed in a random position on the field of vision, obscuring part of the field of view."
Imagine living only half-life.
All jokes aside, it could be brain damage, but drug addiction might be a better analogy. It could be something that takes time for neuroplasticity to repair or correct.