Myth busting - machines reading our minds
Neurohacking - Basics
Written by NHA
Sunday, 04 August 2013 10:00
Neurohacking and personal security, Aug 2013
There's a wave of societal panic going around again about neurohackers back-dooring mind games to extract personal information from the user. The fear is that simple low-cost devices now available, such as the Emotiv, Neurosky or even ThoughtStream, could chart our unconscious responses to questions and reveal unconscious recognition. For example, if we wired you up and showed you a series of numbers while asking 'which of these is the first digit of your birthdate?' or 'which is the first digit of your PIN?', we would be able to tell by your response which digit was correct. We'd be looking for the 'P300', a clear signal of recognition in an EEG readout. By such methods, the anxious imply, we could all be unknowingly hacked into revealing our personal data via the internet and a BCI.

“Hackers backdoor the human brain, successfully extract sensitive data”, claimed ExtremeTech last August, and we felt it was only a matter of time before this hit national headlines. But then a year went by... and we see no mad scramble to ban these devices or to introduce legislation about their use, because in reality they represent no more threat of revelation than an ordinary GSR, and a lot less threat than someone watching your face through a webcam, especially if you're drunk. There is one sensible paper on this subject, and it is here: “On the Feasibility of Side-Channel Attacks with Brain-Computer Interfaces” https://www.usenix.org/system/files/conference/usenixsecurity12/sec12-final56.pdf
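To make the mechanics concrete, here is a minimal sketch in Python of how such a P300 probe could work in principle. It is not the method from the paper above; it assumes you already have EEG trials from a single parietal channel, time-locked to each flashed digit, and it simply averages them and looks for the largest positive deflection in the typical P300 window (roughly 250-500 ms after the stimulus). The function name, the data layout and the 128 Hz sampling rate are all illustrative assumptions.

```python
import numpy as np

def guess_recognized_digit(epochs, fs=128, p300_window=(0.25, 0.50)):
    """Guess which flashed digit the viewer recognized.

    epochs: dict mapping each candidate digit to an array of shape
            (n_trials, n_samples) of EEG from one parietal channel,
            time-locked to stimulus onset (t = 0), sampled at fs Hz.
    Returns the digit whose averaged response shows the largest positive
    deflection in the P300 latency window, plus the per-digit scores.
    """
    start = int(p300_window[0] * fs)
    stop = int(p300_window[1] * fs)
    scores = {}
    for digit, trials in epochs.items():
        erp = trials.mean(axis=0)                # average trials -> ERP
        baseline = erp[: int(0.10 * fs)].mean()  # first ~100 ms as baseline
        scores[digit] = erp[start:stop].max() - baseline
    return max(scores, key=scores.get), scores
```

In practice such a probe would also need band-pass filtering, artifact rejection and many repetitions of each digit to beat the noise, which is part of why these attacks are far less practical than the headlines suggested.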
The flap started when the ExtremeTech article revealed how researchers had basically designed a program for the Emotiv EPOC that flashes up pictures of maps, banks and card PINs, and makes a note every time your brain produces a P300. Afterwards, it's easy to pore through the data and work out, with fairly good accuracy, where a person banks, where they live, and so on. Our first thought was that anyone daft enough to stare at a screen which suddenly starts flashing this sort of thing at them deserves to be hacked.

...But the long and the short of it is that this type of hacking relies simply on recognizing 'recognition' in someone else, and it is nothing new. It has been possible to hack personal data ever since people became aware of facial microexpressions and began training themselves to read them. You don't need any tech to tell what's true or false with this training; recognition is clearly readable in all human faces. If in doubt, consult the work of Paul Ekman: https://en.wikipedia.org/wiki/Paul_Ekman
An even simpler (and far older) method for extracting personal information, as every CIA agent knows, is getting someone drunk and/or sleeping with them. That's why secret agents can be real tarts. That extra boost of norepinephrine/oxytocin makes us all feel so confident, trusting and safe that we happily blab our most private details in this state of mind, and if we think about it, many of us have partners, family or friends who already know our birthdates, bank details and passwords.

In real life, however, we don't get interrogated by strangers online using BCI devices, and we would get extremely suspicious if any game started asking us questions of the personal-data type. If you're paranoid about the possibility, simply don't use a BCI with an online machine to play your games, but bear in mind you're much more vulnerable when you're drunk and someone is talking to you while watching your face.

...And finally, a tip for any wannabe journalists writing about those who would want to extract such details via mindtech or any other means: these people are not called 'Neurohackers'. These people are called 'Assholes'.