Facebook’s Trust Crisis & The Dawn of AI (w/ Roger McNamee) | Real Vision

[MUSIC PLAYING]
The reason I wrote the book was to give people hopefully a much better understanding of why
they should be concerned now, why they really need to get out of their chair and just take
control. [MUSIC PLAYING]
Let’s remember that Facebook is the greatest advertising platform ever created. They essentially have everybody with any disposable
income in one network, with an almost perfect high-resolution view of each one of them,
with emotional triggers and all these other things. They know your birthday. They know where you live, everything. They’ve got your credit card information. They’ve got all your location information,
because they buy it from a cellular company. So they know everything there is to know. The targeting is magnificent. That’s why the numbers are still great. But the other thing you can see is that they’re
having to load a lot more ads into people’s news feed. In my case, every fifth or sixth post is an
ad now, and that’s probably doubled in the last year. And that’s because usage is declining. They say that the number of members is about
steady, but Nielsen says that the minutes of use are down, I don’t know, 20% or 25%. So the present is very strong for them fundamentally. They’re not going to lose the ability to be
a great advertising platform relative to newspapers, or television, or magazines. But if they lose the trust of people, they’re
going to lose their attention. And if they lose their attention, then the
future for Facebook, and Google, and Instagram is not going to be as good. We’re still early in that process. I don’t think that’s going to happen anytime
soon. And the reason I wrote the book was to give
people hopefully a much better understanding of why they should be concerned now, why they
really need to get out of their chair and just take control. When I look at what’s coming in tech, these
smart devices– Alexa-based, Google Home-based– are going to come in a gazillion categories. And they may fill the hole left by the peak
and now decline of smartphones. So that at one level is a really exciting
category. What I would like to see there is a set of
rules that just determine what kind of data you can gather, what you do with it, and
what you have to do to protect against hacking. There was a story just last week about Google’s
Nest division, which has both thermostats and security systems: somebody hacked a
Nest device and convinced people that there was a nuclear missile coming their way. Well, I mean, nobody got hurt, but that could
have been a disaster. So you have the worry of data leakage. You have the worry of hacking for all those
devices. We’re about to repeat the same mistakes we
made with Facebook and Google. And then you’ve got the whole issue with AI. AI may be the single most promising thing
to come along since the microprocessor. And it should make the world a lot better
place. But the early approaches to it, like I described
before, they ship the product as soon as they can get it to work without really thinking
through the negatives. And if you look at the early applications
in real estate, like the mortgages, there’s this concept in real estate lending called
redlining– where they would not let people of certain religions or races into certain
neighborhoods. Well, the people who made those AIs trained
the AIs with the data from the real world, and didn’t correct for those implicit biases. So they created a black box that has all the
flaws of the old world and none of the benefits. Because how do you challenge an AI? You can’t figure out how it made the decision. You can’t appeal it. It’s just final. Well, that’s terrible. And it’s totally unnecessary. The same thing is true in jobs. These AIs that read résumés inherited gender
bias and racial bias. Well, seriously, I think with AI, you have
to treat it like it’s a new pharmaceutical. You’ve got to have proof of safety, efficacy,
and, incrementally, you’ve got to get rid of implicit bias. The good news is, you’re not going to spend 10
years in a clinical trial. We’re going to create standardized software
modules that are embedded in every AI. We’re going to create standardized data sets
for testing for implicit bias. We’re going to plan everything. And it might take a year or two to develop those
things, but then you have a standard that everybody can use. And everything’s better. That’s what we did in chemicals. That’s why you can have really dangerous chemicals
and not have to worry when you go outside, because we’ve got rules. And I think you have to protect society that
way. So I think the future for tech is bright on
opportunity. But it’s so pervasive and so important in
life, that we have to start to subject it to the kind of big-boy rules that you apply to
every important industry. This whole thing that boys will be boys. OK. So they blow up a country? No problem? I don’t see that working going forward. [MUSIC PLAYING]
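[Editor's note: the standardized bias-testing modules and test data sets McNamee describes do not yet exist as an industry standard. As a rough illustration of the idea, here is a minimal, hypothetical sketch of one such check: a demographic-parity gap computed over a toy decision set. The metric choice, the names, and the threshold are all invented for illustration.]

```python
# Hypothetical sketch of a standardized implicit-bias check, in the spirit of
# the testing modules described in the interview. The metric, names, and
# threshold here are illustrative inventions, not an existing standard.

def demographic_parity_gap(decisions, groups):
    """Largest difference in approval rate between any two groups.

    decisions: list of 0/1 model outputs (1 = approve)
    groups:    parallel list of group labels, one per decision
    """
    rates = {}
    for g in set(groups):
        outcomes = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)
    return max(rates.values()) - min(rates.values())

# Toy "standardized test set": comparable applicant profiles, labeled by group.
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_gap(decisions, groups)
print(f"demographic parity gap: {gap:.2f}")  # prints 0.50

# A deployment gate might refuse to ship a model whose gap exceeds a policy
# threshold, the software analogue of a chemical safety standard.
MAX_GAP = 0.20  # hypothetical policy value
if gap > MAX_GAP:
    print("model fails the bias check")
```

In this sketch, group A is approved 75% of the time and group B 25%, so the gap is 0.50 and the model would fail the hypothetical gate. A real standard would cover many metrics and curated test populations, but the mechanism is the same: a shared, auditable test that every shipped model must pass.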

5 thoughts on “Facebook’s Trust Crisis & The Dawn of AI (w/ Roger McNamee) | Real Vision”

  1. studies have shown facebook users suffer from depression: social media use is not healthy, and as time goes on more and more people will realize that once you get off all social media your happiness goes way up. Facebook is the new tobacco.

  2. AI is nothing like the microprocessor and we are far from even having a basic AI. This buzzword has really confused a lot of older investors.
