It’s been a good week for the weird fringes of ed-tech. On Tuesday, Nicholas Negroponte took the stage at the 30th anniversary of TED, the starry technology-entertainment-design conference. Negroponte is the mastermind of One Laptop Per Child, which pioneered the idea of low-cost personal devices for students around the world.
Negroponte has been criticized as an extreme apologist for the “teachers don’t matter, kids + tech = MAGIC” view of ed-tech. When Chris Anderson, TED’s curator, asked him this week for “one final prediction,” he said, according to Ars Technica,
“In 30 years…we’re going to be able to literally ingest information. Once information is in your bloodstream, some kind of mechanism could deposit the information in the brain. You could take a pill and learn English or the works of Shakespeare.”
Sound improbable? Surf over to the Wall Street Journal. Neuroprosthetics, literally brain implants, are coming to market. These include the cochlear implant, for hearing, and a retinal implant for seeing, which received FDA approval last year. Soon, we may see devices that enhance memory, focus, and the speed at which we acquire new information. One group of researchers at UC Berkeley is working on a wireless brain interface made up of thousands of microsensors, each the thickness of a human hair, which they call “neural dust.” It’s the stuff that sci-fi horror movies are made of.
Existing brain-computer interfaces, such as the headsets made by NeuroSky and Muse, pick up large-scale patterns of brain activity. They can give instant feedback about how calm, focused, or excited you are. Jan Plass at NYU, whom I met recently at South by Southwest EDU, is testing the Muse on students using Cerego learning software. The researchers want to see if the feedback can make you smarter, and the software more customized.
Let’s not forget the brain prosthetics we’re already carrying around, known as smartphones. Soon voice-activated wearable devices will further blur the line between knowing a fact and being able to retrieve it. When I start singing my daughter a Simon and Garfunkel song, the next generation of Google Glass or a similar device will automatically bring up the lyrics.
These breakthroughs are fascinating. They’re also troubling, because we don’t really know very much about how the brain works normally. In fact, we don’t even know what we don’t know. The Achilles heel of any technology designed to make us smarter is that people are still smarter than any technology. If you use any kind of designer brain enhancement, you’re voluntarily limiting your own mind to the uses that some designer imagined for it.
Take one futuristic technology that’s already in place in classrooms today: automatic grading of essays using natural language processing. You can find it on the GED and the GMAT, and in products by Pearson and CTB/McGraw-Hill. These programs are still in early stages. They don’t parse meaning; they use crude metrics like word count, frequency of words, spelling, punctuation, plagiarism (based on text available around the web) and some sense of sentence structure. The ratings they give are comparable to the scores given by humans, but that’s not saying much. Essay questions on standardized tests are graded by contract workers who are paid per test and who must spend no more than a minute on each essay in order to earn a living wage.
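To make concrete just how crude these surface metrics are, here is a minimal sketch of a surface-feature essay scorer in the spirit the paragraph describes. Everything here is illustrative: the function name, the features, and the weights are my own assumptions, not the workings of any actual Pearson, CTB/McGraw-Hill, GED, or GMAT product.

```python
import re

def crude_essay_score(text, expected_length=300):
    """Score an essay 0-6 using only surface features.

    Note there is no attempt to parse meaning: a fluent argument and
    well-formatted nonsense of the same length can score identically.
    Feature choices and weights are arbitrary, for illustration only.
    """
    words = re.findall(r"[A-Za-z']+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    word_count = len(words)

    # Vocabulary variety: unique words over total words (type-token ratio).
    variety = len(set(w.lower() for w in words)) / word_count if word_count else 0.0

    # Average sentence length as a rough stand-in for "sentence structure".
    avg_sentence_len = word_count / len(sentences) if sentences else 0.0

    # Length: reward essays approaching the expected word count, capped at 1.
    length_score = min(word_count / expected_length, 1.0)

    # Weighted combination, scaled to a 0-6 rubric.
    return round(6 * (0.5 * length_score
                      + 0.3 * variety
                      + 0.2 * min(avg_sentence_len / 20, 1.0)), 1)
```

The point of the sketch is the limitation it exposes: a student who pads with long, varied sentences games every feature the scorer can see, which is exactly why the ratings are only “comparable” to rushed human grading.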
This week the Contra Costa Times talked to a seventh grade teacher who is using robograding in her classroom. She said it was great! Saved her so much time! Except, “the robo-reader only can score submissions that are on topics the manufacturer has prescribed and the majority of those don’t have any bearing on the books her students are reading.” Do we want to live in a world where our brains are limited to “topics prescribed by the manufacturer”?
[Cross-posted at Hechinger Report]