With all of the conversations, conference sessions, government initiatives, and books on 21st century learning, personalized learning, and the like, one would think we’d already have a clear sense of the future of learning. I’m not sure we do. We truly do need to be, and produce, lifelong learners. I heard that term for the first time in the early ’90s, and only in the past decade has it really resonated for me, given the acceleration of change we are experiencing. I was at a traditional conference with 1,200 others this past Thursday and Friday, and an Edcamp on Saturday, doing my lifelong learning thing. I have recently switched to taking notes live on Twitter and find myself immersed in a three-dimensional learning experience. It’s a bit disorienting and mind-boggling, to be honest. It’s challenging to focus on the physical session and take relevant notes (tweets) while engaging with other tweeters in that room and in sessions I’m not physically attending. People not at the conference also chime in via Twitter, and I engage with them too. It’s an exhausting but exhilarating way to learn. I wish my technology could help with this… It started me thinking: what could (should) learning really look like? What could transformative learning look like? How might technology be the disruptive agent?
I find that most people are not able to think beyond what technology can do today into a very different future. It is hard to reimagine the now as something different. But come along with me for a little reimagining… What if our “phones” take another disruptive leap in the near future and become true personal digital assistants? Imagine that, after a suitable “learning” period, our phones “know” how we think, our preferences, what ideas stimulate us, and so on. What if they could “storify” what we hear, see, and share, folding the backchannel feeds into a learning and knowledge repository? These new tools would “know” which points and ideas would resonate with us, highlight them, and bring in connections from blogs, news articles, other tweets, and feeds. Having gathered the relevant information and people, our tools would then pre-synthesize and prioritize for us. Our “phone”, which increasingly understands us, could even tweet and share on our behalf, saving us the time to do so. Imagine all of this happening in real time. Now we’re free to focus on the synthesized whole: the live speaker, the other sessions or teachers, and the stream of related information from other sources, without the distraction of capturing, recording, and synthesizing the details.
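To make the “filter and pre-synthesize” idea a little more concrete, here is a toy sketch of one small piece of such an assistant: scoring incoming notes and tweets against a learner’s interest profile and bundling the resonant ones into a ranked digest. Everything here (the `InterestProfile` class, the `digest` function, the keyword-overlap scoring) is hypothetical illustration, not a real product or API; a genuine assistant would need far richer models of “resonance” than keyword matching.

```python
# Toy sketch of an imagined personal-assistant pipeline:
# score notes against an interest profile, keep what resonates, rank it.
from dataclasses import dataclass, field


@dataclass
class InterestProfile:
    """What the assistant has 'learned' about how its owner thinks."""
    keywords: set = field(default_factory=set)

    def resonance(self, note: str) -> float:
        """Fraction of profile keywords that appear in the note."""
        words = {w.strip(".,!?").lower() for w in note.split()}
        if not self.keywords:
            return 0.0
        return len(self.keywords & words) / len(self.keywords)


def digest(profile: InterestProfile, notes: list, threshold: float = 0.2) -> list:
    """Pre-synthesize: keep only resonant notes, most resonant first."""
    scored = [(profile.resonance(n), n) for n in notes]
    kept = [(score, n) for score, n in scored if score >= threshold]
    return [n for score, n in sorted(kept, reverse=True)]


profile = InterestProfile(keywords={"learning", "technology", "disruption"})
notes = [
    "Technology as a disruptive agent in learning",
    "Conference lunch menu posted in the lobby",
    "New research on personalized learning platforms",
]
print(digest(profile, notes))
```

Running this keeps the two learning-related notes and drops the lunch menu; the real assistant imagined above would do this continuously, across every feed, in real time.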
With our new phones doing the work of assimilating relevant information, we’re now able to engage in highly informed, critical face-to-face conversations with other people. Perhaps keynotes, workshops, and classroom lectures are now chunked to create time for small-group cooperative learning. The heavy lifting of capturing, relating, and synthesizing information has been done; now we learners can add our purely human attributes into the mix. We can bring our emotions, our biases, and our experiences to each other and to the broadly synthesized content provided by our phones. Imagine how deep the conversations could be, the scenarios that could be created, the problems that could be contemplated. With most or all of the research and assimilation done, we’d have time to go deep, engage, and transform our thinking. Our learning experiences would then move well past hearing, recording and repeating notes, and shallow conversation to rich, deep, meaningful discussion, problem solving, goal setting, initiative design, and project planning and action. These latter activities are what we strive for; they are the reasons we attend classes, workshops, and conferences.
I know, you’re thinking “science fiction”. But think about the tools we all take for granted that past generations would have considered wild dreams. My phone today forecasts traffic problems along the routes I regularly drive to and from work. It sends and receives messages, pictures, and videos to and from people anywhere in the world in seconds. It speaks to me and I to it. It reminds me when I’m supposed to be places, how to get there with maps, who will be there with me, and why I’m going. It lets me search vast databases around the world while speeding along in a train or airplane, and talk to other people over full-motion live video. It lets me read millions of books and tracks my highlights and notes somewhere “in space” where other people can also read and use them. It tells me which compass direction I’m heading. It knows about every digital conversation I have. I could go on and on, but the point is this: this little handheld device seems to know no bounds in taking over things that other machines, other people, or I used to do on my own. Is it a stretch to imagine it gaining understanding of what it “knows” and does? Perhaps Steve Wheeler is writing about an early version of this in his Next generation learning article. Would this reimagined phone not simply be a faster computer with a more sophisticated knowledge representation and processing algorithm? Perhaps a 3rd- or 4th-generation version of IBM’s Watson in your pocket?
“If you make some very logical, and even conservative, assumptions about where technology is likely to lead in the coming years, much of the conventional wisdom about what the future will look like becomes unsupportable.” (Martin Ford, The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future)
What could students do, what could you do, if you had such a device? How might you reimagine learning for K12 students where every one of them has such a device? What would conferences look like where every participant has such a device? Let your imagination go…