Monday, March 21, 2011

The Multifaceted Effects of User Profiling in the Entertainment Industry

Jason Kilar, the CEO of Hulu, recently made a post on his company's blog giving a startlingly direct overview of their marketing strategy and his predictions for the future of TV and online video distribution in general. A key portion of his argument focused on innovation and increased efficiency in marketing. Like many, he banks on the assumption that in the near future, advertisers will be able to collect viewer information in order to target a desired audience more accurately.

Indeed, marketers have been trying to collect this information for decades; however, establishing such a system requires drastically overhauling the way users currently interact with websites and distribute their private data. For an exchange like this to work, it seems necessary to have demographic and usage information tracked and distributed by the web browser itself. There's growing evidence that Google Chrome might soon be doing just that; in fact, Google already uses a similar model in its popular email service Gmail, where ads are chosen based on keyword analysis of a user's recent messages (a practice which continues to encounter heavy criticism due to privacy concerns). In the future, distributors like Hulu and Amazon might pay browser companies a premium for access to a user's profile. As an incentive for giving up their private information, users could receive free access to premium content, better suggestions of products or shows, and of course fewer actual advertisements: "send us your profile to watch this program with only one commercial interruption (if you stay anonymous there will be five)."

Whatever becomes the dominant system for collecting and organizing data, there's no arguing that the online medium is already providing valuable new feedback that companies can use to develop and market their products more effectively. At the same time, this surplus of information and predictability can be detrimental to creative innovation. A recent article in GQ, "The Day the Movies Died," caused quite a commotion because it finally provided a depressingly informative explanation of something many had noticed but few really understood: "why is Hollywood putting out so many remakes lately?"

Throughout the article, Mark Harris explains that Hollywood studios have lately come to rely on a single formula: pick a successful existing product and turn it into a film. This explains why this season we will see "...four adaptations of comic books. One prequel to an adaptation of a comic book. One sequel to a sequel to a movie based on a toy..." and so on. Studio executives have found that the safest strategy is to market something that's already familiar to the audience. Apparently it has gotten to the point that even an original smash hit like "Inception" gets written off as a statistical anomaly, a mistake, a glitch in the formula. The problem with this line of reasoning is that, while safe and profitable, it cannot account for innovation and thus leads to creative stagnation. Consistency might not sound like such a tragedy if the products are tires and hamburgers, but when it comes to things like movies, music, and games, arguably our most popular modern art forms, this halt of progress is a very troubling matter. As user profiling, prediction algorithms, and neuromarketing become more accessible and widespread, companies will face the difficult responsibility of striking a balance between safe formulas and unpredictable new ideas. We can only hope that great original content has a place in this model.

Sunday, August 15, 2010

Updates on voice analysis, etc.

"Stress detector can hear it in your voice"
Normally we have full control over our vocal muscles and change their position to create different intonations, says Yin. "But when stressed, we lose control of the position of the speech muscles," and our speech becomes more monotone, he says.

Yin tested his stress detector in a call centre to identify which interviewees were more relaxed during recruitment tests. The number of new staff that left after three months subsequently fell from 18 per cent to 12 per cent, he claims. The detector was shown at trade show CeBIT Australia in May.
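The mechanism Yin describes, flattened intonation under stress, can be illustrated with a toy sketch (this is not his actual detector): take pitch samples as given and flag speech whose pitch barely varies around its mean. The threshold and the sample values below are invented for illustration.

```python
import statistics

def monotony_score(f0_hz):
    """Crude proxy for vocal monotony: the coefficient of variation
    of the speaker's pitch (fundamental frequency) samples.
    Lower values mean flatter, more monotone speech."""
    return statistics.stdev(f0_hz) / statistics.mean(f0_hz)

def sounds_stressed(f0_hz, threshold=0.05):
    # Flag speech whose pitch varies less than `threshold` relative
    # to its mean; the threshold here is an illustrative guess.
    return monotony_score(f0_hz) < threshold

relaxed = [180, 210, 165, 230, 190, 175, 220]   # lively intonation
stressed = [185, 187, 184, 186, 185, 188, 186]  # flat delivery

print(sounds_stressed(relaxed))   # → False
print(sounds_stressed(stressed))  # → True
```

A real system would first have to extract pitch from audio and control for the speaker's baseline, which is where the hard work actually is.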

"Innovation: Google may know your desires before you do"
In future, your Google account may be allowed, under some as-yet-unidentified privacy policy, to know a whole lot about your life and the lives of those close to you. It will know birthdays and anniversaries, consumer gadget preferences, preferred hobbies and pastimes, even favourite foods. It will also know where you are, and be able to get in touch with your local stores via their websites.

Singhal says that could make life a lot easier. For instance, he imagines his wife's birthday is coming up. If he has signed up to the searching-without-searching algorithm (I'll call it "SWS" for now), it sees the event on the horizon and alerts him – as a calendar function can now. But the software then reads his wife's consumer preferences file and checks the real-time Twitter and Facebook feeds that Google now indexes for the latest buzz products that are likely to appeal to her.

"Roila: a spoken language for robots"
The Netherlands' Eindhoven University of Technology is developing ROILA, a spoken language designed to be easily understandable by robots.

The number of robots in our society is increasing rapidly. The number of service robots that interact with everyday people already outnumbers industrial robots. The easiest way to communicate with these service robots, such as Roomba or Nao, would be natural speech. But current speech recognition technology has not reached a level yet at which it would be easy to use. Often robots misunderstand words or are not able to make sense of them. Some researchers argue that speech recognition will never reach the level of humans.

I talked about this earlier in the post about machine translation: the reason it sucks is that people rarely speak clearly and constantly use slang, etc. But if the technology becomes commonplace, then as it learns to understand slang, we'll also learn how to speak in a way that's easy for the machine to understand and/or translate.
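The ROILA idea, and the point above about learning to speak machine-friendly, both come down to shrinking the space of things the recognizer has to distinguish. Here's a toy sketch of why that helps (the command words are invented, not actual ROILA vocabulary): with a small, acoustically distinct vocabulary, even a garbled transcription can be snapped to the nearest valid command.

```python
from difflib import get_close_matches

# Tiny fixed vocabulary; each word is deliberately distinct from the others.
COMMANDS = ["forward", "backward", "left", "right", "stop"]

def interpret(heard: str):
    """Snap a (possibly garbled) transcribed word to the closest
    command in the fixed vocabulary, or None if nothing is close."""
    match = get_close_matches(heard.lower(), COMMANDS, n=1, cutoff=0.6)
    return match[0] if match else None

print(interpret("forwad"))   # garbled but unambiguous → "forward"
print(interpret("stob"))     # → "stop"
print(interpret("banana"))   # nothing close → None
```

With open-ended natural speech there is no such short list to snap to, which is exactly why unconstrained recognition is so much harder.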

"Speech-to-Speech Android App"

"See what Google knows about your social circle"

Google started including "your social circle" in its search results earlier this year. Ever wonder how Google knows who you know? Wonder no more, as the Mountain View firm offers a page explaining exactly how inter-connected your online life really is.

The link below leads you to a page where Google explains the three levels of contact it can trace between you and other people, with the depth depending on whether you've filled out a Google Profile and how busy you are on Google services like Chat and Reader. You'll see your "direct connections" through Chat and other contact-creating apps, direct connections from sites you've linked to in your profile (including those you follow on services like Twitter), and those friends-of-a-friend through your direct connections.
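The "levels of contact" Google describes can be pictured as hops in a graph. A minimal sketch with an invented contact graph: direct connections are one hop out, and friends-of-a-friend are everyone exactly two hops out who isn't already a direct connection.

```python
# Invented contact graph: each person maps to their direct connections.
direct = {
    "you":   {"alice", "bob"},
    "alice": {"you", "carol"},
    "bob":   {"you", "carol", "dave"},
    "carol": {"alice", "bob"},
    "dave":  {"bob"},
}

def friends_of_friends(graph, person):
    """Everyone reachable in exactly two hops, excluding yourself
    and the people you already know directly."""
    firsthand = graph[person]
    secondhand = set()
    for friend in firsthand:
        secondhand |= graph.get(friend, set())
    return secondhand - firsthand - {person}

print(sorted(friends_of_friends(direct, "you")))  # → ['carol', 'dave']
```

Google's third level (people linked from sites on your profile) would just be more edge types layered onto the same graph.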

"Google working on voice recognition for all browsers"
In some ways it seemed inevitable, but in other ways, it's still an awesome idea. InfoWorld reports that Google is building speech recognition technologies for browsers, and not just their own Chrome—all browsers, as an "industry standard." Beyond making certain searches easy to fire off with a spoken phrase, voice recognition might also give the web a whole new class of webapps that listen for audio cues. Do you want your browser to understand what you're telling it? Or is the keyboard still your preferred lingua franca for non-mobile browsing? [InfoWorld]

Thursday, June 17, 2010

Just in Case...


More -

If anyone's reading this and hasn't watched "More" by director Mark Osborne, do yourself a favor and give it a whirl; it's only six minutes. (When this link is no longer good, you can probably find it on YouTube.)

It's one of those films that becomes more and more relevant every day. Especially amazing is the fact that in 1998 he basically spelled out all the happiness and woe that's still about 15 years out, when AR glasses really take off. Of course, scientists and writers had been speculating about similar concepts since the '40s, but for me Osborne really captures it in all its corporate, fluorescent light ("packaged BLISS"), eventually highlighting the fact that people can even tell something's wrong in the way their view of the world has changed, but alas...

Project Google Goggles is called Google GOGGLES for a reason.

Fountain of Youth



New Scientist has a nice article about the possibility of immortal life through an avatar that digitally captures your personality. As we've talked about before, this would be viewed by your descendants or loved ones after your death in order to give them a momentary sense of comfort/respect.

"Ultimately, however, they aim to create a personalised, conscious avatar embodied in a robot - effectively enabling you, or some semblance of you, to achieve immortality. "If you can upload yourself into this digital form, it could live forever," says Nick Mayer of Lifenaut, a US company that is exploring ways to build lifelike avatars. "It really is a way of avoiding death."

...Like many people, I have often dreamed of having a clone: an alternative self that could share my workload, give me more leisure time and perhaps provide me with a way to live longer.

How my avatar looks may in the end matter less than its behaviour, according to researchers at the University of Central Florida in Orlando and the University of Illinois in Chicago. Since 2007, they have been collaborating on Project Lifelike, which aims to create a realistic avatar of Alexander Schwarzkopf, former director of the US National Science Foundation.

They showed around 1000 students videos and photos of Schwarzkopf, along with prototype avatars, and used the feedback to try to work out what features of a person people pay most attention to. They conclude that focusing on the idiosyncratic movements that make a person unique is more important than creating a lifelike image. "It might be how they cock their head when they speak or how they arch an eyebrow," says Steve Jones of the University of Illinois.

Equally important is ensuring that these movements appear in the correct context. To do this, Jones's team has been trying to link contextual markers like specific words or phrases with movements of the head, to indicate that the avatar is listening, for example. "If an avatar is listening to you tell a sad story, what you want to see is some empathy," says Jones, though he admits they haven't cracked this yet.

The next challenge is to make an avatar converse like a human. At the moment the most lifelike behaviour comes from chatbots, software that can analyse the context of a conversation and produce intelligent-sounding responses as if it is thinking. Lifenaut goes one step further by tailoring the chatbot software to an individual. According to Rollo Carpenter of artificial intelligence (AI) company Icogno in Exeter, UK, this is about the limit of what's possible at the moment, a software replica that is "not going to be self-aware or equivalent to you, but is one which other people could hold a conversation with and for a few moments at least believe that there was a part of you in there".

...Lifenaut's avatar might appear to respond like a human, but how do you get it to resemble you? The only way is to teach it about yourself. This personality upload is a laborious process. The first stage involves rating some 480 statements such as "I like to please others" and "I sympathise with the homeless", according to how accurately they reflect my feelings.

...One alternative would be to automatically capture information about your daily life and feed it directly into an avatar. "Lifeloggers" such as Microsoft researcher Gordon Bell are already doing this to some extent, by wearing a portable camera that records large portions of their lives on film.

A team led by Nigel Shadbolt at the University of Southampton, UK, is trying to improve on this by developing software that can combine digital images taken throughout the day with information from your diary, social networking sites you have visited, and GPS recordings of your location. Other researchers are considering integrating physiological data like heart rate to provide basic emotional context. To date, however, there has been little effort to combine all this into anything resembling an avatar. We're still some way off creating an accurate replica of an individual, says Shadbolt. "I'm sure we could create a software agent with attitude, but whether it's my attitude seems to be very doubtful," he says."

Monday, May 31, 2010

iDollators


While surfing the wrong parts of the internet again, I've stumbled into some research being done on life-size sex dolls. In case anyone is still in the clear, please make yourself feel at home here in the gutter: http://video.google.com/videoplay?docid=-7277801797935788405#

So these things are slowly gaining the ability to walk and talk in a very rudimentary, creepy style, still very much stuck within the uncanny valley. Safely assuming that the technologies for voice recognition, speech synthesis, emotion detection, facial replication, etc. progress at a realistic rate, at some point we'll have something convincing enough that it will start to create quite a few social problems.

On a somewhat related note, here's this video from a while back showing research being done in human-humanoid interaction. Since these things will keep progressing, imagine the humanoid as a sexy fembot and the human with sleek, unobtrusive HUD glasses.

Innovation Blues

So I attended this "TechCrunch: Disrupt" event last week by agreeing to work for free in exchange for a ticket (usually $3k or something ridiculous).

Disappointment.

Out of 100 startups, I'd say 70% were something completely mundane: "so with youtube and other current video hosting sites, you're required to convert to certain formats and limit your videos to a certain length- us? No limits."

27%, including the company I was working for, were relatively interesting but at least a year late and in no way revolutionary; basically just slightly more efficient combinations of pre-existing ideas: "alright, this is like youtube but it's mobile, geotagged and social," etc.

The remaining three companies were actually interesting. One was called uJam, which is some sort of app where you can sing a song and then hear it orchestrated with background music. When I stumbled upon their exhibit I was actually a little pissed because I felt like they had stolen my idea from a while ago: "In another situation we're in a group hanging out on the street. Our devices know we're together talking. Suddenly one of the more inebriated amongst us breaks out into song- a drunken rendition of the latest top 40 hit. His device quickly runs a song recognition on what he's singing to identify a possible match, based on what it knows he's listened to lately and in the past [remember, it's hearing what he hears on a daily basis, keeping track the whole time]. Before he's hit the second chorus, it's figured out that he's quoting the latest T-Pain song, although a bit too slow, out of tune and in a different key. Nonetheless, like any good accompanist, the machine tries to follow his singing- it tries to make him sound as good as possible. To accomplish this, it transposes into the tempo and key he's set.
As this happens, everyone in the group hears an accompanying melody fade into what he's singing in real time. Like a live musical or a constant karaoke machine, this device adds acoustic background to whatever it hears. Life becomes a movie as simply hanging out with friends takes on cinematic effects. "
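The "follow the singer" step in that scenario is, at its core, just transposition and time-stretching once song recognition has found a match. A toy sketch in MIDI terms (the riff, the detected key, and the tempos are all invented for illustration):

```python
def transpose(notes, from_root, to_root):
    """Shift a sequence of MIDI notes by the semitone distance
    between two key roots (also given as MIDI note numbers)."""
    shift = to_root - from_root
    return [n + shift for n in notes]

def stretch(durations, from_bpm, to_bpm):
    """Rescale note durations (in seconds) from the recording's
    tempo to the singer's tempo."""
    return [d * from_bpm / to_bpm for d in durations]

# Accompaniment riff as recorded, in C (root = MIDI 60).
riff = [60, 64, 67, 72]             # C, E, G, C
# Pitch tracking says the singer landed in A (MIDI 57), three
# semitones low, and at 90 bpm instead of the recording's 120.
print(transpose(riff, 60, 57))      # → [57, 61, 64, 69]
print(stretch([0.5, 0.5, 1.0], 120, 90))
```

The genuinely hard parts, real-time pitch tracking and deciding where in the song the singer actually is, are exactly what uJam and its successors had to solve.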


But apparently Bill Gates had said it over a decade ago: "And make no mistake, there will be great applications of all kinds on the Internet - much better and far more plentiful than the ones available today. Many of tomorrow's net applications will be purely for fun, as they are today. ... You might hum a little tune of your own into a microphone and then play it back to hear what it could sound like if it were orchestrated or performed by a rock group."

Bill Gates - The Road Ahead, 1996

and of course people have been dreaming about some sort of "magic harmonizer machine" for ages now, so...


"What has been done will be done again. There is nothing new under the sun"

Ecclesiastes 1:9


Wednesday, May 12, 2010

Emotion Detection Through Voice Making Progress


Computer Software Decodes Emotions Over the Phone

from Discovery News
"THE GIST
  • A company called eXaudios has developed software that detects emotions during a phone call.
  • The program is currently used by companies to assist customer service agents.
  • The versatile software could even soon diagnose Parkinson's disease, schizophrenia and even cancer."

As the computer becomes better at recognizing our moods, it becomes better at positively or negatively changing them. From my post about the digital secretary, which I think really benefits from being placed in the context of this software:

"But let's go further and a little bit darker. If we improve CGI and voice simulation, there will be no reason not to have this secretary actually appear as a simulated friend- one who knows what will make you happy depending on your mood, one who won't mind giving you perpetually undivided attention. If it's linked to your cellphone, this friend could also fill you in on things and give you advice:

"you know you seem a little down today- why not try calling Robert or Jessica- you haven't seen them in a while, and last time you hung out you all had a great time." (it was listening to the quality of your voices and watching everyone's facial expressions)

you: "I dunno, what about Eryka, she seems pretty cool... what are the chances that she's into me?"

computer: "approximately 3,720 to 1"

you: "damn"


Even further and much darker: what if we allowed our secretaries to communicate with one another, even just temporarily, say at a party? They would watch everyone's interactions, occasionally chiming in to suggest mingling (think a futuristic version of Facebook suggesting that we help people become more social). Toward the end of the night, the secretary agents could begin some sort of game-theory algorithm, trading data until they arrive at a "greatest possible universal happiness" formula that pairs off those of us wanting to go home, or head to the next party, with someone in the most efficient way. And even if we don't agree to trade data, people will each use their own gathered information to see who they might have a shot with as things are winding down. Of course, young people will also learn how to fool the system (like learning to pass a lie detector test), which would have certain advantages. As computer-aided social interaction becomes the norm, how will everything change?"
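For a party small enough, that "greatest possible universal happiness" pairing doesn't even need game theory; brute force over every possible set of pairs works. A toy sketch with invented names and compatibility scores (a real system would estimate the scores from observed interactions):

```python
# Pairwise compatibility scores, invented for illustration.
compat = {
    frozenset({"ann", "ben"}): 7,
    frozenset({"ann", "cal"}): 2,
    frozenset({"ann", "dee"}): 4,
    frozenset({"ben", "cal"}): 5,
    frozenset({"ben", "dee"}): 1,
    frozenset({"cal", "dee"}): 8,
}

def best_pairing(people):
    """Enumerate every way to split the group into pairs and keep
    the pairing with the highest total compatibility."""
    def pairings(remaining):
        if not remaining:
            yield []
            return
        first, rest = remaining[0], remaining[1:]
        # Fix `first` with each possible partner, then pair the rest.
        for i, partner in enumerate(rest):
            for tail in pairings(rest[:i] + rest[i + 1:]):
                yield [frozenset({first, partner})] + tail
    best, best_score = None, float("-inf")
    for p in pairings(sorted(people)):
        score = sum(compat[pair] for pair in p)
        if score > best_score:
            best, best_score = p, score
    return best, best_score

pairs, total = best_pairing(["ann", "ben", "cal", "dee"])
print(total)  # → 15 (ann with ben, cal with dee)
```

Brute force blows up factorially, of course; at an actual party-sized crowd, the secretaries would need a proper maximum-weight matching algorithm instead.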