We’re not there, yet: Thoughts on Voice and Artificial Intelligence

I have two old acoustic guitars at home. They’re pretty smashed up; you could almost peel the bodies apart if enough force were exerted. In effect, I keep them glued together with a lot of electrical tape. Aesthetically it’s horrendous, but it does give off that artistic vibe, which, truth be told, isn’t my aim. My aim is simply to hold the instruments together, because you can’t replicate that sound anywhere else. More than ten years’ worth of playing. The sound grows with you and, somehow, evolves into what you make it.

And that’s the beauty of secondhand guitars. Or any instrument. There’s a story behind the wood, in the same way that there’s a story behind the cast iron skillet you inherit from your grandma (those are the best: all the flavors of the past generation infused into that skillet). Or an aged cask of wine. Somehow, there’s so much more meaning when you inherit something from someone. Like buying your guitar secondhand, because it was broken in by the previous artist.

I was listening to an episode of the Radiolab podcast in which Jad and Robert were discussing artificial intelligence, talking to machines to be more precise (do give it a listen; it’s an amazing episode), and how machines can, in some way, learn.

Have you ever owned a Furby? It was a stuffed toy that could talk to you. I had one in the ’90s. It was creepy because it could mimic learning. Out of the box, a Furby would speak in its native “Furbish” language, which was essentially gibberish. But after you talked to it for some time, it would start learning English and speak back. Imagine you’re a ten-year-old kid and you own one of these. Mind-blowing.

So I got a 16GB iPhone 4S. Thanks, SMART (I’ve been working with SMART since last year). Like any other geek, I decided to put Siri through her paces and see just how good she is (I’m calling Siri “she” for lack of a better gender reference). And yeah, she’s OK. She seems to understand what I want to do about 60% of the time, probably because of my accent and certain Filipino names she tries to digest. But that’s not the point. The creators of Siri said that she can learn from you, and the “learning” part totally intrigues me. But as my title says, we’re not there. Yet.

Because this is how I see voice technology: it is the most human of technologies, more human than touch. When a computer starts to recognize your voice, learn from it, and detect your emotions from the way you speak, what happens when you sell your phone?

When you buy a secondhand phone in the future, are you inheriting an artificial intelligence that has been shaped by its previous owner’s lifestyle? Because if it has, I would think twice before resetting it. Will it miss his voice? Will it make suggestions on places to go and music to play based on its previous user? Will it paint a picture of its previous user so clear that it would give you amazing insight into a complete stranger’s life? That’s a movie script right there. Wouldn’t that just be so amazing?

Yes, it would. But we’re not there. Yet.

2 Responses to “We’re not there, yet: Thoughts on Voice and Artificial Intelligence”

  1. blankpixels says:

Interesting thoughts, Jayvee. Right now, I’m already having difficulty letting go of my old gadgets because of their “sentimental” value, plus the fear that someone might find something I thought I had already deleted. LOL

    I hope we’ll get there. :)

  2. Rye says:

They might just store all of that in the “cloud,” and the AI your previous gadget learned could be synced (à la knowledge transfer) to your new gadget…

It’d be really frightening, though, if hackers got into that cloud. They could find a sh!tload of embarrassing stuff about users.

