Neurovis: Visualizing brain signals in 3D in real-time (neuropro.ch)
109 points by breck on Jan 5, 2018 | 56 comments


The consumer-grade EEGs you would use for this, like OpenBCI, place rudimentary sensors on the scalp. There just isn't enough information from these types of sensors to give you a real look at what signals are occurring in the brain. This is a fundamentally flawed premise, and it's nothing more than a toy. Real brain research must be done with fMRIs or much more accurate EEGs than consumers have access to, and institutions with such advanced tech already have visualisation software.


What is the most advanced open-source solution to the brain-computer interface problem? AFAIK they had gotten pretty good at decoding even low-sensor-density data into actionable input, but all solutions suffer from input/compute lag.

So that said, fMRIs won't work for real time, will they, given how long scans take? So EEGs seem to be the only choice. Perhaps to make up for accuracy you have to increase sensor density enough, and then you'd get a system that's fairly accurate. My guess is that most modern attempts have poor sensor density.


There are many open-source solutions for BCI projects, like BCI2000 (http://www.schalklab.org/research/bci2000), OpenVibe (http://openvibe.inria.fr/) and EEGLab (https://sccn.ucsd.edu/eeglab/index.php). That's based on the kinds of tools that our customers use. Most of these aren't as pretty as a tool like Neurovis, but in reality most of the information we use to control prosthetics or signal intention involves looking at the temporal-frequency relationships between and within broad regions of the brain. There isn't a lot to gain from just watching the brain light up like this for BCI — the main use for a visualisation like this is, as the docs say, the diagnosis and determination of epilepsy, since in epilepsy the activity you'd see is much higher than usual.
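To make "temporal-frequency relationships" concrete, here's a rough Python sketch of the kind of band-power feature most BCI pipelines start from (the channel count, sampling rate and bands are placeholder assumptions, not anything specific to the tools above):

    # Band power per channel via a Welch PSD -- the bread and butter of
    # motor-imagery BCI, rather than pretty 3D renderings.
    import numpy as np
    from scipy.signal import welch

    fs = 250                              # assumed scalp-EEG sampling rate, Hz
    eeg = np.random.randn(8, fs * 4)      # placeholder: 8 channels, 4 s of data

    def band_power(x, fs, lo, hi):
        freqs, psd = welch(x, fs=fs, nperseg=fs)
        band = (freqs >= lo) & (freqs <= hi)
        return np.trapz(psd[..., band], freqs[band], axis=-1)

    mu = band_power(eeg, fs, 8, 12)       # mu rhythm, modulated by motor imagery
    beta = band_power(eeg, fs, 13, 30)    # beta band
    print(mu.shape, beta.shape)           # one power value per channel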

EEG has better temporal resolution than fMRI; you are measuring electrical activity rather than vascular changes, and the former changes far more rapidly than the latter. EEG, however, only captures the surface activity of the brain, so you don't get information about (physically) deeper brain processes; this is where fMRI is invaluable. EEG is also limited by the size of the electrodes and how many you can physically place in one location; about 256 electrodes on an EEG cap is the practical limit.

Electrocorticography (ECoG) involves implanting electrodes on the dura; this can substantially improve the density of electrodes in a given area. However, it still doesn't measure deep brain activity, and we have no way of leaving the electrode grid in place for long periods without risking infection. For BCI, we've been able to classify more classes of data using ECoG than EEG — the research by Kyousuke Kamada and Gerwin Schalk is informative. It's a very promising area if we can work out how to implant the electrodes, seal the skull and telemeter data out.

Magnetoencephalography (MEG) can help with measuring deep brain activity, but there are other tradeoffs to consider. Essentially, where we are now is combining multiple techniques to get the best temporal, spatial and frequency compromises.

Thus, in answer to your question: no, fMRI is not great for real-time responses; measuring electrical activity has better temporal properties, so EEG and ECoG work better here. Sensor density is part of the problem, but once you solve density you then need to consider how you'll deal with deep-brain measurements.

Background: I own a company that distributes BCI equipment for g.tec medical engineering in the UK. We've been operating in this space for 8 years. I have a PhD in pharmacology and a speciality in electrophysiology.


Thank you so much for taking the time to answer my questions. Looks like I have some reading up to do.

The reason I ask about this is that my long-term thinking is that while motion-controlled virtual reality is fun and going to be good for exercise games and training, I think brain-computer interfaces represent the most promising of the interaction methods (nothing says you can't be hooked up and still be typing, using a mouse, or motion-controlling either).

You mentioned the medical desire to see deeper brain waves, but I just want to control a computer, so given the current state of EEGs, are they useful yet as a practical operating-system control input?


And part of the reason EEGs have poor sensor density is that attaching and removing them is a pain in the ass, especially if you have long hair.

I participated in a study where my EEG was taken while I watched some random videos, and half the time was spent on putting enough conductive gel in my hair below the sensors to get a stable signal, and afterwards I had to shower to get rid of the stuff.

There was a bald guy in the study who amazed everyone because he could just put on the cap and get a signal, so EEG might work as a brain-computer interface for some people. But for the general population, it's just too much of a hassle.


EEG nets that use saline solutions are much more manageable than gel-based nets, from what I've seen (disclaimer: I've only ever had to use the saline-solution nets). That said, avoiding crosstalk between electrodes / bridging was a bitch.


See here for all the tradeoffs involved in current brain imaging technology: https://waitbutwhy.com/2017/04/neuralink.html

Long story short, if you want a real-time interface that also has high spatial resolution, you need to get invasive (and increase the number of electrodes by a lot).


There are real-time fMRI applications. You just won't get to have one in your home. 3 teslas, dude. That makes metal fly, and it's hyper-expensive.


I've had multiple MR and CT scans and some EEGs - of all these, only EEG is kinda "real-timeish", isn't it? Do we have any technology these days that can _map_ surface electrical changes to brain internals (depth data), or anything that can work in true 3D?


Yep, only EEG gives you that kind of information. fMRI takes a lot of post-processing with complex statistics to produce any kind of useful information. I worked on a study that used fMRI to study the involvement of early visual areas in the cortex during reading tasks, and when processing long runs we'd get on with other stuff while the data was being processed.

There isn't any existing tech to take surface data and "map" it to internal states because there isn't a simple relationship between one and the other. For real-time recording of activity deeper in the brain than a few centimetres, you need to implant electrodes during surgery, which obviously only happens in humans when that person needs brain surgery already.

There's some interesting stuff being done around decision-making with implanted electrodes in monkeys, but that's understandably not to everyone's taste.


EEG can do real-time imaging because the sampling rate is much better. EEG also requires fairly intensive data processing, but it can usually be done faster than fMRI because:

1. There is often less (and different) information in EEG. If you have 64 electrodes, you have far less data to process than the >100,000 voxels in fMRI.

2. There have been more reasons to look at EEG in real time. fMRI analyses are generally done with group-level statistics, so you usually have to run many participants, which takes weeks to months anyway.

>There isn't any existing tech to take surface data and "map" it to internal states

Eh, sort of. Algorithms like LORETA, which do source reconstruction, have been around for a while and do an OK job of finding the general area a signal comes from.
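If anyone wants to try it, MNE-Python ships an sLORETA solver (a LORETA variant). A minimal sketch against MNE's bundled sample dataset looks roughly like this (the file names follow that dataset, not anything from this thread):

    # Distributed source reconstruction with sLORETA in MNE-Python.
    from pathlib import Path
    import mne
    from mne.datasets import sample
    from mne.minimum_norm import make_inverse_operator, apply_inverse

    data_path = Path(sample.data_path())
    meg_dir = data_path / "MEG" / "sample"

    evoked = mne.read_evokeds(meg_dir / "sample_audvis-ave.fif",
                              condition=0, baseline=(None, 0))
    fwd = mne.read_forward_solution(meg_dir / "sample_audvis-meg-eeg-oct-6-fwd.fif")
    cov = mne.read_cov(meg_dir / "sample_audvis-cov.fif")

    inv = make_inverse_operator(evoked.info, fwd, cov)
    stc = apply_inverse(evoked, inv, lambda2=1.0 / 9.0, method="sLORETA")
    print(stc.data.shape)   # (n_sources, n_times): estimated activity per source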


> fMRI takes a lot of post-processing with complex statistics to produce any kind of useful information.

So does EEG. The reason EEG is closer to real-time than fMRI is the sampling rate. With fMRI, 0.9 Hz is considered quite good. For EEG, 2000 Hz is considered standard.


Standard? A 2000 Hz sampling rate might be used with intracranial electrodes.

If the EEG is measured from the scalp, a 200-250 Hz sampling rate is usually good enough. That allows EEG recordings up to 80-90 Hz. Most clinical uses are interested in EEG below 40 Hz.
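For reference, the arithmetic behind those numbers (the Nyquist limit is half the sampling rate; the anti-aliasing filter roll-off is why 250 Hz gives a usable 80-90 Hz rather than the full 125 Hz):

    # Nyquist limit = fs / 2 for each sampling rate mentioned above.
    for fs in (250, 2000):                # scalp vs intracranial sampling rates
        print(f"fs = {fs} Hz -> Nyquist limit {fs / 2:.0f} Hz")
    # For comparison, fMRI at ~0.9 Hz (one volume every ~1.1 s) can only
    # resolve fluctuations slower than ~0.45 Hz.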


My mistake: I got mixed up and reported standard sampling rates for MEG.

You are, of course, correct.


Thanks for the clarification. I've only worked with fMRI, so I wasn't aware of the ins and outs of EEG processing, despite knowing about the spatial/temporal resolution difference. It makes this method even more suspicious!


> when processing long runs we'd get on with other stuff while the data was being processed.

The main reason why fMRI processing takes so long is the reluctance of the major neuroimaging suites to embrace GPUs. A newer suite, BROCCOLI [1], is substantially faster but has much less adoption and isn't quite as full-featured as the other suites.

[1] https://www.frontiersin.org/articles/10.3389/fninf.2014.0002...


It's not just GPUs; most labs don't embrace parallelism in general. I managed to get a 55x speedup between serial and parallel versions of the same program (SPM) by using several mid-grade computers and caching (sorry, I don't have anything written up on this; I've been putting it off).
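Not the actual SPM batch I used, just the shape of the idea: preprocessing is embarrassingly parallel across runs, so you farm independent jobs out to worker processes (or machines) and cache finished steps. The paths and the pipeline call below are placeholders:

    # Parallel, cached per-run preprocessing -- a toy sketch in Python.
    from multiprocessing import Pool
    from pathlib import Path
    import subprocess

    RUNS = sorted(Path("/data/study").glob("sub-*/run-*"))   # hypothetical layout

    def preprocess(run_dir):
        marker = run_dir / "preproc_done"
        if marker.exists():                        # crude cache: skip finished runs
            return f"cached {run_dir}"
        # stand-in for the real pipeline call (SPM batch, FSL, etc.)
        subprocess.run(["echo", f"preprocessing {run_dir}"], check=True)
        marker.touch()
        return f"done {run_dir}"

    if __name__ == "__main__":
        with Pool(processes=8) as pool:            # one worker per core/machine
            for msg in pool.imap_unordered(preprocess, RUNS):
                print(msg)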

It's hard to get researchers to switch away from the algorithms they know and are comfortable with, which is why I think stuff like BROCCOLI or PSOM hasn't taken off.


There are real questions regarding the degree of locality information you get from EEG. The temporal resolution, though, is measured in milliseconds.

There is real-time fMRI research, particularly using classifiers. The temporal resolution tends to be around 2-7 seconds (2 seconds for each acquisition, but the hemodynamic response unfolds over 7+ seconds).


You can use a LORETA inverse solution to go from EEG to voxels that approximate internal sources. The problem is that this model makes several assumptions about brain structure that may or may not hold. In my opinion, surface currents are only good for surface sources, as the neurons in the cortex are aligned and thus generate relatively strong currents compared with neurons deeper in the brain, which are oriented "randomly".


You can do real-time fMRI for relatively simple tasks: saying a word, tapping a finger, moving the tongue. "Real-time" is the wrong phrasing, as the data builds up over several runs, but you see it as it builds and get something useful a minute or two into acquisition.


I'm no expert, but what about imaging the brain using a radio-wave diffraction pattern?

You put an emitting antenna in front of the person and, 1 meter behind them, a grid array of software-defined radios (say 10x10 = 100 SDRs; SDRs are so cheap that this would cost roughly the price of an OpenBCI set). You record the diffraction/interference pattern, then analyse it in software to produce a 3D image by solving the inverse scattering problem, recovering fluid flows and densities (you should even get speeds via Doppler).

My simple back-of-the-envelope calculation says that at 5 GHz we should rather easily get a spatial resolution of lambda/4 = 1.5 cm, maybe better using super-resolution techniques. Going to 60 GHz, we should soon be able to reach millimetre-scale spatial resolution.
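Spelling that out (free-space wavelength only; the shorter wavelength inside tissue and super-resolution tricks would change the numbers):

    c = 3e8                                  # speed of light, m/s
    for f_hz in (5e9, 60e9):                 # 5 GHz and 60 GHz
        lam = c / f_hz
        print(f"{f_hz / 1e9:.0f} GHz: lambda = {lam * 100:.2f} cm, "
              f"lambda/4 = {lam / 4 * 1000:.2f} mm")
    # 5 GHz  -> lambda = 6.00 cm, lambda/4 = 15.00 mm (the ~1.5 cm above)
    # 60 GHz -> lambda = 0.50 cm, lambda/4 =  1.25 mm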

It obviously won't reach the quality levels of fMRI.

Can someone from the field say what kind of bandwidth we should expect from such a setup?


The EEG signal-to-noise ratio is so ridiculously low that you really need to be as close to the brain as possible. Any muscular activity produces electrical currents an order of magnitude larger.

As a side note: what an fMRI sees and what an EEG measures are fundamentally different. EEG can reliably detect only the pyramidal neurons in the cortex; an MRI can see below that.


The eye blinks... oh the eye blinks! Many a datapoint has been rendered useless by a dry contact lens.


I'm not suggesting that we measure the signal emitted by the brain as EEG does. I'm suggesting that we look at fluid movements as fMRI does, but using diffraction, like X-ray imaging with longer wavelengths.


>I'm suggesting that we look at fluid movements as fMRI does, but using diffraction, like X-ray imaging with longer wavelengths.

What would we gain from using RF-diffraction as opposed to MR?

The major advantage of fMRI is that you're measuring a so-called Blood-Oxygen-Level-Dependent (BOLD) signal. That is, you're looking at a contrast between oxygenated and de-oxygenated blood, which is more interesting than just looking at where blood (of all types) is going.

The underlying assumption is that brain volumes consume oxygen at a rate roughly proportional to the level of neuronal activity. As such, a BOLD signal is really what you want.
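If it helps, here's a toy sketch of the standard model implied by that assumption: the BOLD signal is (approximately) neural activity convolved with a canonical haemodynamic response that peaks several seconds after the event. The double-gamma parameters below are the commonly used defaults, nothing scanner-specific:

    # Predicted BOLD response to a single brief neural event.
    import numpy as np
    from scipy.stats import gamma

    dt = 0.1
    t = np.arange(0, 30, dt)                          # seconds
    hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0    # canonical double-gamma HRF
    hrf /= hrf.sum()

    neural = np.zeros_like(t)
    neural[0] = 1.0                                   # one brief event at t = 0
    bold = np.convolve(neural, hrf)[: len(t)]         # modelled BOLD time course

    print("BOLD peaks at t =", t[np.argmax(bold)], "s")   # roughly 5 s after the event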


>BOLD signal is really what you want

Thank you so much (I have no medical knowledge, and that is some key domain knowledge I'd missed). How much correlation with blood flow is there? Maybe the BOLD signal could be reconstructed from the combined blood-flow data.

>What would we gain from using RF-diffraction as opposed to MR?

The main gain is practicality. The main usage would be as a brain interface. This is phase contrast tomography, so you can get the refractive index of the materials inside, and their speed.

It won't be as precise as MRI, but it will be more practical. More portable. It doesn't use magnets, so it's not limited by maximum magnetic field strength. It's more scalable if you have compute power, meaning you can add more SDRs to increase the resolution and bandwidth if necessary. Time resolution would probably be better, as it doesn't need moving parts (if beamforming). It's cheaper. It's non-invasive and usable 24/7.

We already have Wi-Fi imaging, so this is just putting it in favourable conditions to extract more information from the brain. Basically, until we can get full connectome activation in real time, we are just side-channel attacking the brain to extract some information. I'm just trying to grasp how much information we can get with existing techniques, and how useful it is.

For example, you put this device inside every monitor (for 60 GHz, or in a TV-sized box for 5 GHz) and a software-controlled radio emitter behind your head. If the useful information bandwidth is sufficient, we could have a vastly better interface to the computer.

This can probably be adapted for room-sized continuous body health monitoring if the resolution is not sufficient or if the info extracted is not relevant enough.


>Thank you so much

My pleasure :)

>This is phase contrast tomography, so you can get the refractive index of the materials inside, and their speed.

This is way outside of my domain of competence, but if I understand correctly, this would not constitute a BOLD signal?

I'm skeptical of the idea that blood diffusion alone correlates with cerebral activity. With fMRI, you're looking at temporary gradients of de-oxygenated hemoglobin rather than a net direction of blood movement. This is sometimes confused with "blood flow", and I've been guilty of interchanging the two terms myself :/ I suspect the fluid dynamics of blood in the head are outrageously noisy and difficult to contrast out or filter through statistical means (because they're roughly random).

Concerning the relationship between neural activity and the BOLD signal, this paper [https://www.nature.com/articles/35084005] is a good starting point. I'll see if I can dig up the PDF for you.


You are right this is not a BOLD signal.

I was hoping the brain was somehow able to vasoconstrict to direct its oxygen supply toward some region, and that we could observe that. But what I'm picking up from you is that it's more like there is fluid everywhere and cells just consume more or less of it, like an open buffet (versus cells waiting at the table and asking for more or less food).

^^found your paper without paywall but haven't read it yet [http://wexler.free.fr/library/files/logothetis%20(2001)%20ne...]


>wexler.free.fr

Holy shit, I am laughing my ass off. That domain belongs to a guy in my former lab :'D


Does de-oxygenated blood have a measurably different refractive index w.r.t. radio waves than oxygenated blood?


No, but de-oxygenated hemoglobin is paramagnetic, and therefore detectable using what are called T2*-weighted scan sequences.

http://casemed.case.edu/clerkships/neurology/Web%20Neurorad/...


No, and that's why MRI has an undeniable advantage if that's what's important. What I was hoping was that if the brain somehow "opens" various valves to bring more juice to a certain area, that would be useful information (regardless of whether there is O2 in the blood, or where that O2 is consumed).


I see, so you are proposing a way to infer capillary expansion/contraction as a way to measure blood flow.


It’s always great to have informed responses here so thank you for contributing.

I gotta say, though, regarding the other comments on data quality: if a layperson reads through the wiki pages on common brain-imaging techniques (as I did), including fMRI, it seems easy to come away, rightly or wrongly, thinking: that's all we have? That's the state-of-the-art spatial/temporal resolution? No wonder we can't model the brain from first principles.

The amount of innovation, effort, and complexity that was needed to get imaging where it is today seems amazing to me; just glossing over the major achievements, it's Nobel Prize-level stuff. Yet you guys seem to still work in so much darkness with so little light. It seems that rather than imaging one day simply revealing fundamental secrets, it's providing clues that will continue to require great insights and experiments to change the world.

But again, full admission of popcorn eating ignorance in these assumptions.


You hit the nail on the head. Cognitive neuroscience -- especially WRT functional imaging -- reminds me of the fable of the blind men and the elephant.

(Very) roughly speaking, you have the choice between:

- Excellent temporal resolution & crappy spatial resolution (EEG / MEG)

- Pretty good spatial resolution & crappy temporal resolution (fMRI)

Other techniques (NIRS, TMS, etc) fall somewhere along the spectrum and have their own quirks. We're peering into brain function with a super-low-res "camera" and attempting to draw only the roughest contours of the thing we're exploring.


What you want is better temporal resolution, better spatial resolution, a less restrictive environment and a more direct measure of activity. The indirect BOLD signal has a long delay and is hard to separate accurately from noise. The spatial resolution of fMRI is a fraction of that achievable by clinical imaging, and the MR environment is very expensive, restrictive and generally unhelpful for many experiments.


And in an ideal world you want an even more direct measure than BOLD, which is why folks are examining perfusion imaging.



> fMRIs or much more accurate EEGs

Or neural recordings with electrodes. Better spatial resolution than EEG and fMRI, and way better temporal resolution than fMRI.


If you mean surface electrodes, then that is EEG: ElectroEncephaloGram. If you mean electrodes inside the brain, then you're talking brain surgery. I'm not sure what point you're making here - could you elucidate?


I did mean electrodes in the brain (or spine), and mentioned that for completeness, to make it clear there's another method besides fMRI and EEG. It does have the obvious disadvantage that it involves surgery, but the type of signal recorded with it has advantages over the other methods.


When available in humans, that data is amazing. It is used frequently in animal models though.


The video looks really cool, even if it just shows how fast the blood flow in the brain moves about. I didn't realise it was that quick, so that was interesting. But their technology paragraph makes no sense:

"Unity takes advantage of the computational power of the graphical processing unit (GPU) thus greatly improving processing speed for real-time analytics and data visualisation."

And so does any other 3D application from the last 40 years. It's why we have GPUs. Unity does nothing special, and in fact doesn't even give you access to the full power of the GPU (like CUDA does).

"Streaming data from local files, the cloud or connected headsets improves memory usage and computational performance."

Streaming data from the internet improves performance? That makes no sense.


> Streaming data from the internet improves performance? That makes no sense.

The streaming part improves performance over loading all the data in advance. I think that's what they mean.
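Presumably something along these lines: read fixed-size chunks instead of loading the whole recording, so peak memory stays at one chunk however long the session is. A minimal sketch, with a made-up file name and layout (interleaved float32 frames):

    import numpy as np

    N_CHANNELS, CHUNK_SAMPLES = 64, 1024

    def stream_chunks(path):
        """Yield (samples, channels) arrays one chunk at a time."""
        with open(path, "rb") as f:
            while True:
                buf = f.read(N_CHANNELS * CHUNK_SAMPLES * 4)   # 4 bytes per float32
                if not buf:
                    break
                yield np.frombuffer(buf, dtype=np.float32).reshape(-1, N_CHANNELS)

    # for chunk in stream_chunks("recording.dat"):
    #     update_visualisation(chunk)   # hypothetical render call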


This is as useful as plotting traffic data from NY onto a map of SF.


This comment could be useful if you'd explain why, but as it stands, it's just a shallow dismissal of the kind we're hoping to avoid here. Please don't post like that.

If, however, you'd like to neutrally explain the point (neutrally = without snark or spin), so we could all learn something, that would be great.

https://news.ycombinator.com/newsguidelines.html


This quote - "facilitating intuitive interpretation of complex phenomena" - makes me very suspicious. Inference in neuroimaging is complex and you need to take a great deal of care when interpreting data. There isn't really such a thing as a good outcome from "intuitive" interpretation of neuroimaging data.

For example, there was a story in 2011 about some research that said the brain's "love centre" was being activated by a video of a vibrating and ringing iPhone. They were talking about the insular cortex, but the insular cortex is also involved in vomiting.


This is very cool -- does anyone have experience with the company or product? This looks like something I want for my home lab.


Cool, but practically useless.


Interesting, but I think the heart one makes more sense. The brain? Imagine you put an EEG/ECG-style heat map on an AlphaGo neural network and looked at it. Any use? And that network isn't even close to the 100-billion-plus neurons we have. I wonder. But the graphics, especially the heart, are interesting.


Isn't AlphaGo a DQN? If so, how would you generate any EEG or ECG heat maps? Or rather, how would you generate a state-action space given that data?


The only new thing about these visualisations is that they are nicer than what is currently done. And by nicer I mean they have more polygons and translucency. Most tools working with EEG have extremely similar visualisations bundled with them.


Better than Emotiv's locked-down software, I hope? The worst part of the Emotiv experience is simply registering to be able to use their software...


NeuroVIS is just amazing.


Is it? It looks like something you could code up in a few weeks: just take a random brain model, make it semi-translucent, then map the data inside the model onto a continuous, smoothed 3D grid, and voilà.
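Pretty much; the "map the data onto a smoothed 3D grid" part is a few lines with off-the-shelf interpolation. A rough sketch with scipy (electrode positions and values are random placeholders):

    import numpy as np
    from scipy.interpolate import griddata

    rng = np.random.default_rng(0)
    electrode_xyz = rng.uniform(-1, 1, size=(64, 3))   # fake sensor positions
    values = rng.standard_normal(64)                   # fake amplitudes

    grid = np.mgrid[-1:1:30j, -1:1:30j, -1:1:30j]      # 30x30x30 voxel grid
    points = np.stack([g.ravel() for g in grid], axis=-1)
    volume = griddata(electrode_xyz, values, points, method="linear")
    volume = volume.reshape(30, 30, 30)                # ready to texture onto a mesh
    print(np.nanmin(volume), np.nanmax(volume))        # NaN outside the sensor hull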


I'm not an EEG guy, but I do fMRI. The little video on the site didn't give me an indication of what makes it notable or useful. EEG researchers create heat maps over electrode points all the time. They also look at time series at those electrode points.

Edit: I guess the focus is on presenting the data as it is collected. Not sure whether that is novel, but it's more specific than I thought at first.



