Recently I was reading an article about Dadbot—the chatbot created by James Vlahos that aimed to emulate his father. The idea for the project came about when his father was diagnosed with stage IV lung cancer, which compelled James to create a bot that could recount his father’s life story in his father’s distinctive manner and with a sense of his personality. James started with a series of manuscripts generated from hours of interviews and worked from there.
I found this story and process touching. It tugged at my heart as I thought of all the conversations I had with my own father that I wish I had recordings of: all the bits of stories and the personality that my kids will never get to know. Many of our conversations were lopsided, with a lot of me talking and, finally, some very short, sage advice from him. What if the stories, personality, and advice could be captured in this medium?
As I learned more about James’ process of creating the bot, I started thinking about one of our favorite movies, Night at the Museum, in which statues come to life every night and antics ensue. For me, a statue of someone like Teddy Roosevelt (portrayed by Robin Williams in the movie), wise and full of encouragement, captures something that could never be expressed by reading a short blurb or hearing about him in history class. Williams’ portrayal gave Roosevelt a personality, a voice, a demeanor, and so much more.
I then thought to myself: What if something like this could be done today? What if I could get a small taste of who someone was without a complicated animatronic creation? There is just something about a conversation that is much more engaging, at least for me, than merely hearing about a person or listening to a recording.
What if I could go talk to a facsimile of Albert Einstein, Teddy Roosevelt, Mother Teresa, or a plethora of other great minds without the burden of a fake phone call or a continuous pre-recorded speech?
What if, like the earliest attempts at Artificial Intelligence (AI), a chatbot were created based on what we know of the person, but without the cumbersome keyboard interface or the overly complicated visuals that most Augmented Reality (AR) technology requires?
What if I could just go and have a conversation with an inanimate object that talks back, not in the far distant future, but today, with real-world, current technology?
Given that AR overlays the natural world with computer-generated sensory input in real time, the idea of truly bringing statues to life is possible. For instance, one semiconductor company that could untie those stony tongues is Microchip Technology. It provides microcontroller, mixed-signal, analog, and Flash-IP solutions that together could create this form of augmented reality. One component I found particularly interesting is Microchip’s Voice Recognition family, which offers speech recognition of 100 common words in under 500 ms.
The keys that would make my idea possible are the speed, the ability to operate offline, and the ability to pass each recognized word on to a branching program. This very basic version of AR is achievable and would be an interesting addition to a museum visit, if nothing else to say hello to one of my all-time heroes, Teddy Roosevelt.
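The branching program itself could be quite simple: once the hardware recognizes a word from its fixed vocabulary, the program only needs to map that word to a pre-recorded response and play it. Here is a minimal sketch of that idea in Python; the vocabulary, file names, and `respond` function are all hypothetical, chosen purely for illustration rather than taken from any Microchip API.

```python
# Hypothetical branching program: map a word recognized by the
# speech-recognition hardware to a pre-recorded audio clip.
RESPONSES = {
    "hello":  "greeting.wav",       # a friendly welcome
    "charge": "san_juan_hill.wav",  # a story from San Juan Hill
    "advice": "man_in_arena.wav",   # the "Man in the Arena" passage
}

# Clip to play when the recognized word has no branch of its own.
FALLBACK = "did_not_catch.wav"

def respond(recognized_word: str) -> str:
    """Return the audio clip to play for a recognized word."""
    return RESPONSES.get(recognized_word.lower(), FALLBACK)
```

Because the recognizer is limited to a small set of common words, a plain lookup table like this is all the "conversation logic" the statue would need; richer back-and-forth could layer simple state on top of it later.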