Computing: Reflections and the Path Forward

I’m delighted to introduce
the first technical session of the day, whose focus is MIT’s
rich history of innovations in computation,
and how that sets the stage for possible
paths forward. During this session, we’ll hear
from Sherry Turkle, the Abby Rockefeller Mauze Professor
of Social Studies of Science and Technology, Sir Tim
Berners-Lee, the 3Com Founders Professor of Engineering,
and Patrick Winston, the Ford Professor of Artificial
Intelligence and Computer Science. They will share their
perspectives on MIT’s role in advancing the frontiers
of computer science research and education, in
scaling access to information and computation globally, and in
considering the social impacts of computation. To help set the
stage, let me give you five very quick examples
of MIT’s wonderful history in computing. The pioneers of
artificial intelligence included Marvin Minsky and
John McCarthy, key players in the 1956 Dartmouth Conference
that created the field. Both served as MIT faculty. And McCarthy, of course, later moved to Stanford to launch its AI effort. Public-key encryption
underlies much of modern secure
communication on the internet. It was pioneered, in part,
by Ron Rivest, Adi Shamir, and Len Adleman, or RSA– all at MIT at the time. And others, of course, also
contributed to this area, including MIT
alumnus Whit Diffie. Starting in the 1960s,
MIT fostered a wide range of fundamental research in
human and computer vision, manipulation, and locomotion. This led, among
others, to the creation of iRobot, Boston Dynamics,
Mobileye, and SenseTime. In 1983, jointly
with DEC and IBM, MIT launched Project Athena. It provided access to
computing resources for every student at MIT. And it led to the creation
of Kerberos, the X Window System, and Zephyr, an early instant messaging system. And in education for all,
I’ve had the privilege of working with John
Guttag and Ana Bell, to create a MOOC on
computational thinking– which has had 1.2 million
learners around the world. These are just a few examples of
MIT’s influence in computation, and its impact on the world. And later today, you’ll hear
examples of current innovations in computation, in such diverse
areas as biology, medicine, economics, design, urban
planning, finance, and others. So let’s get the day started. [APPLAUSE]

Good morning, I’m Sherry Turkle. I will be talking about
a critique of the idea of the friction free. Most of us here
today were introduced to the idea of the friction
free as a really good thing. It’s an aesthetic of
engineering efficiency, so why shouldn’t it be
a really good thing? But technology is the
architect of our intimacies. Technology shapes our ways of
thinking about social life, about politics. It shapes our ways of thinking
even about ourselves– about the self itself. So this idea that
technical things should be smooth
and easy blends into and bleeds into other domains. Efficiency becomes
aspirational– in politics, in
business, in education, and in our thinking
about relationships. And that’s the kind of
thing that I study here in my MIT career. In my own research on
technology and people, I see hopes for the
friction free pop up– when humans of all ages tell me
why they prefer, for example, to text rather than talk. Why they would rather send
an email to a colleague just in the next cubicle,
or in the next office. Or why they would rather text their spouse than have a face-to-face conversation. The answer is usually tied up with a hope for greater efficiency and less vulnerability. That’s friction free. Now, artificial intelligence,
perhaps without meaning to, has become deeply
woven into this story. Why? Because artificial
intelligence is almost definitionally about
the promise of efficiency without vulnerability– or, increasingly,
about the illusion of companionship without the
demands of friendship. But by trying to move ahead
toward the friction free, we are getting ourselves into
all kinds of new trouble. But here, I get ahead
of myself, and let me backtrack just for a moment. The idea of the friction free
has particular meaning for me, because I’m a member
of a generation that sold it to the world. So I want to begin by
talking about my generation. And I’m the Harvard
class of ’69. Don’t boo, don’t boo. And we’re about to have our
50th college reunion in June. And since the 2016
election, I’ve been studying how my class– that class of ’69– that famous class of ’69– thinks about our choices. And I’ve discovered
that as we look back many of us had the
expectation that for us things should be easy, including
our political activism. Now, why did we think that
things should be easy? In my interviews, I hear that we
were the children of those who had triumphed over fascism– in some cases, over the
threat of extermination. My parents, for example, told
me that they had saved the world so that I wouldn’t have to. And we were supposed
to have an easier life. An easier life. Well, the Vietnam era,
that wasn’t so easy. But after the war
was over, my cohort was quick to declare victory. We inserted ourselves into
a narrative of progress, technology, and efficiency. We even had new ideas about
making politics efficient. Consultants– we would outsource
activism to professionals. This idea of efficiency
shaped our worldview. And we shaped a world in which
when people look for solutions the first look is often to the
efficiencies of technology. Indeed, my generation
made our love of the digital technology
that came of age with us central to our identity. This new digital world,
infused with our values, had a distinctive aesthetic. And I have said what it is. The difficult will
it be made easy. The rough will become smooth. That which had friction
will become friction free. That digital world we would
give to ourselves and we would give to our children. And this new world that
the computer augured wouldn’t just be friction
free in the sense that economic transactions
would go more smoothly– helped by such things as
electronic-funds transfer. No, this vision was to
minimize and even eliminate social friction, as well– interactions that might
cause emotional stress. In one often-cited near-future scenario, one that is near self-parody but which has actually been translated, in part, into an app that you can put on your phone, you order a beverage
on your phone– your mochaccino, cappuccino,
whatever you kind of want– and you send it to your
favorite coffee shop. And as you walk to pick it
up, an app on your phone routes you so that you avoid
your ex-spouse, or anybody you are having an argument with– your department chairman who you’re not in a good place with. And you only pass your friends. It’s like the Marauder’s Map in Hogwarts. It’s a Harry Potter thing. And it prevents you from
seeing any of these people with whom you might
have any– and there’s the word– friction. But who said that a
life without conflict, without dealing with
the past, or rubbing up against troublesome people
makes for the good life? Well, we did. We did. And you can see the fit between
my generation’s aesthetic of easy and what’s possible
in the world of apps. But there was also
considerable tension. Because in many cases,
life was teaching us one thing and technology
was teaching us another. Let me go through some examples. Life taught, for example, that
political organizing was hard. The internet made it more
convenient, but less effective. Face-to-face conversation taught
that when we stumble and lose our words, it’s painful, but we
reveal ourselves to each other. Screen life allowed us
to edit our thoughts, never be interrupted,
and broadcast at will. We preached authenticity, but
we practiced self curation. We preached authenticity, but
we practiced self curation. Technology encouraged us to
forget what we knew about life. And we made a digital
world, where we could forget what life was teaching us. My generation infused
digital technology with our value of easy. But here is my call to arms,
after a professional lifetime here at MIT studying
this technology. It’s time to associate the
digital with other values than the value of easy. Let’s say, the opposite of easy. And it’s time to remember
that the opposite of easy is not just difficult.
The opposite of easy is also evoked by words
such as complex, involved, and demanding. That’s what digital
culture demands of us now. It’s time to reclaim our
attention, our solitude, our privacy, and our democracy. We have time to make
the corrections– not much time, but we have time. And to remember who we
are– creatures of history, of deep psychology, of
complex relationships, that intrinsically generate friction
as they are worked out. Why is that? Friction means being authentic. Friction means
being vulnerable– putting yourself in the
place of another person, with all of the conflict
that can bring– including inner conflict,
that needs to be faced. How should we take
these insights into our thinking
about the new college? First, consider the idea
of unintended consequences. For years I’ve written
about technology’s unintended consequences. And that narrative no
longer fits the known facts. We now introduce technology
with consequences that we can see straight off– with consequences
that are intended. We knowingly put in place
technology that will spy on us, use our lives as data– for the purposes and the
profit of corporations, political parties,
governments– anyone, really, who can profit from what we
say, see, or watch online. Computer counselors–
this is a subject very close to my heart–
computer counselors, in the role of psychotherapists,
are put in place to simulate the feeling
of human understanding where there is none. Technology is becoming an
intentional participant in what I call an assault on empathy. Making this step from seeing
technology’s effects as unintended to intended wakes
us up to our responsibility as citizens, as consumers,
and frankly, as humans. Second, get responsible
about social media. Around 10 years ago,
when Facebook was just coming into high schools, I
began interviewing students about their attitudes
about privacy. One young woman, an early
Facebook enthusiast, told me she wasn’t
much concerned. And she said to
me, who would care about me and my little life? And it was a good question. And here’s the answer. In the current corporate
regime, when we go online, our little lives
are bought and sold in bits and pieces to
the highest bidder, and for any purpose. When I wrote about
that interview in Alone Together in 2012, I asked
whether we could have intimacy without privacy, and whether
we could have democracy without privacy. And I argued that,
no, we could not. But here’s the thing. When I considered
those questions, I thought about those two problems separately. I had a lot to learn. The social-media
business model evolved to sell our privacy in ways that
have fractured our democracy. All of this unfolded
in plain sight. But here’s what I’ve
learned in my studies. Even after we could see it
unfolding in plain sight, we didn’t want to see it. We had a love affair with
technology that seemed magical. And like all magic it worked
by commanding our attention, so that we took our eyes off
what was actually going on. But here we are
today in a new place and with a mandate
to pay attention. We can no longer
say, who will care about us and our little lives? Now the question is,
how much do we care? We have to face not
only the question, how does technology
impact society, but another question
more difficult to deal with but always adjacent to it– how does society
impact technology? Because technology is
animated by money and power, by social values and
social blindness. Once you look for it, you
see society and technology everywhere. If a program to decide who gets
a mortgage sees mostly white faces, because mostly white
faces have received mortgages in the past, the
program will be more likely to say that white
faces should get mortgages. Society in technology. If more white people get bail, a
program trained in that culture will suggest bail
for white people. Society in technology.
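To see the mechanism concretely, here is a deliberately simple sketch– purely illustrative and not from the talk; the group labels, the invented historical records, and the toy model are all hypothetical– of how historical skew flows straight into a learned rule:

    # Illustrative toy example: a "model" that learns approval rates per group
    # from skewed historical data, then reproduces that skew in its decisions.
    from collections import defaultdict

    # Hypothetical historical records: (group, was_approved)
    history = [("A", True)] * 80 + [("A", False)] * 20 \
            + [("B", True)] * 20 + [("B", False)] * 80

    def train(records):
        # Learn each group's historical approval rate.
        counts = defaultdict(lambda: [0, 0])   # group -> [approved, total]
        for group, approved in records:
            counts[group][0] += approved
            counts[group][1] += 1
        return {g: a / t for g, (a, t) in counts.items()}

    def predict(rates, group):
        # Approve whenever the group's historical rate exceeds 50%.
        return rates[group] > 0.5

    rates = train(history)
    print(predict(rates, "A"))   # True: the historically favored group stays favored
    print(predict(rates, "B"))   # False: the historically disfavored group stays disfavored

Nothing in the sketch mentions race, yet the pattern persists: society in the data becomes society in the code.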
These examples have become well known. But they are good to think with
because they illustrate why AI scientists need to
be trained in a new, digitally-sophisticated
sociology of knowledge– because social relations will
always become embodied in code. So we have to live in
our technological world, but remember what we know
about life and the life we want to live. We have to work
on the real world as hard as we work
on our technology. We can’t just work
on our technology and hope it fixes
the real world. That’s how I see the mandate
of this new institution. Thank you very much. [APPLAUSE]

Well, thank you. Thank you for setting up,
for defining the context. As we go, it’s an exciting time. It’s a positive time– a
time of hope– and to be starting a new college. But anywhere where you
talk about technology, you talk about
computing out there– not just to experts, like you’ve
seen, but people on the street, in general– when you ask them about the web,
about technology in general, people are very concerned,
people are very skeptical. There is much more of a concern
bouncing all the utopia– which we started off
with 30 years ago– than there’s ever been before. And for very good reasons. And some of those
have been outlined. Let me say that, for
me, looking back– it was 30 years. We say that the
birth of the web was when I wrote the first memo,
and that was March 12, 1989. In 1989, I was looking back at 20 years of internet development, sitting there at CERN in Geneva, and decided that we needed
to have a global information space. It would be really cool. I needed it. It should be a
collaborative medium. And I looked at the technology
which was available. And I looked at the capacity
of computers and programming languages, and computer
protocols and so on, and put together
the World Wide Web. But back then, for those of you who have enough gray hairs to remember what it was like, there was a sort of fashion for cyber utopia. John Perry Barlow had
written a manifesto for cyberspace that
basically said, guys, we won’t need all your
organizational structures. We won’t need your nations. Because when we connect
in the cyber world, we will connect just as
individuals, as peers. There will be peace and love. And we will organize ourselves
without all this stuff which comes from nations and laws. Because on the web, on the
internet there were no nations. And in fact, it’s true. When I started off, I sat
down and plugged my computer into the internet
in CERN in Geneva. And nobody coming to the very
first web server that I built, with the installations of
the very first web browsers, had any idea that I
was in Switzerland. And when they made a blog and
stored it somewhere on the web, they had no idea. And they didn’t care about
the international border. So you would have been
forgiven for imagining that we could go down a path
where we end up producing very much stronger
social structures, very much stronger democracies. For democracies and things,
we looked at the blogosphere. And, initially, when people
blogged on the internet there was something very,
very positive about it. They felt that they were both choosing their words so as to get more and more readers, and choosing the things they linked to, linking only to the other blogs which were as good as they could be.
within the other bloggers that they discovered
and they linked to, and who linked to them,
there was this feeling that this is great. Because I’m just
writing about the bird that I like, and all the
other bird fanciers are writing about the birds– together, we more or less
have a better online resource about all these different
birds than we’ve ever had– than has ever been published in a book. And things like
Wikipedia, which now is one of the marvels of the web– it’s a brilliant example of
where people work together to tweak the way
the processes work, to tweak the way that you
can complain about things, the way arguments are handled. And the way, eventually,
the community as a whole works towards some idea of
positive absolute truth. And, Wikipedia, that’s been a
great example for the positive. And in fact, for
the first 20 years of that 30 years of the web,
if you’d talked to me about it– if you’d come
to me and said, Tim, you invented this web thing
and I found some junk on it. You know, there’s all
kinds of bad stuff on it. I would say to you as a
user, just don’t go there. Don’t click on that again. If you click on the
links on that page once, then take it out
of your bookmarks. Nurture your bookmarks to
point you to good things. And people did it. And we all did. And we all ended up with
brilliant experience of life on the web. But, in a way, we were using a technique which in fact wrapped us into what we now know as a filter bubble. It wraps us into a group of people who end up living in a world in which they all mutually agree about a lot of things. And where we don’t
worry about the fact that there is another
group of people– very large groups of people– who have not ended up in virtuous circles where they end up producing truth, but have ended up
in vicious circles where they’ve ended up producing
untruth and nastiness. And so, with the web, we need to do a mid-course correction. I’ve been calling for many
years– more than a decade– for web science. In the sense of a science like this: just as we have a science to look at the brain, we need web science to look at this process. The process of web
science involves looking at the way people
interact on the web, and the way organizations
interact on the web. So it’s a very
multidisciplinary thing. So I’ve been calling for
that for a long time. One of the great things
about a college of computing is that it should be
very multidisciplinary. Yes, a huge amount of energy has to go to computing. But it has to be done in a way that brings in all the other fields– some of which are completely new fields, but certainly all the ones we know. We have to involve economics. You can’t understand how the world works without understanding economics. You can’t understand how the web works without it. Psychology– you
can’t understand how the web works without it either. And you need people who understand how microscopic systems lead to macroscopic phenomena. So you need physicists. You need climate scientists. Because there
is a climate change which we have spent a long
time looking at, but now we have a social climate change. And the social climate
has changed very much for the worse. Just as the climate in
the world has got hotter, the social climate, a lot of
people feel, has gotten nasty. So we need to use this
college of computing as a very powerful tool,
to bring all of the fields around computing
together with computing, in order to do a reset. Some of you may
have heard– yes, I have a project at MIT called
Solid– solid.mit.edu– which I find exciting as one part of a sort of reboot of the web. It’s a project to use web
technology, but in a way where we reorganize things. We separate the
apps from the data. We say that everybody should
be in complete control of their own data, so that you
get what we call a Solid pod. You have one or two pods– some for home, some for work. They may be out there in a
piece of cloud that you own. Or you may be running it
on a computer at home, if you’re on the geeky side. But wherever you
store your data, the Solid rule is you have
complete control over who and what gets access
to it for what. And so the Solid attitude is use
web technologies but in a way in which we cheat we
flip the world around. People have said, but you’re
turning the whole privacy question upside down. I think it’s more a question
of turning it right side up. We’re building a world
in which individuals own their own data initially. And if anybody else wants to
use it, you have to come to me.
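As a conceptual sketch only– hypothetical Python, not Solid’s actual APIs (for those, see solid.inrupt.com)– the rule just described might be modeled like this: the data lives in a pod the person owns, and an app can read only what the owner has granted.

    # Conceptual sketch of the Solid idea (not the real Solid API):
    # data lives in a pod the owner controls; apps see only what is granted.
    class Pod:
        def __init__(self, owner):
            self.owner = owner
            self.data = {}      # resource path -> contents
            self.grants = {}    # (app, resource path) -> set of access modes

        def put(self, path, contents):
            self.data[path] = contents

        def grant(self, app, path, modes):
            # Owner-only operation: allow an app the given modes on a resource.
            self.grants[(app, path)] = set(modes)

        def read(self, app, path):
            # Enforce the rule: no grant, no access.
            if "read" not in self.grants.get((app, path), set()):
                raise PermissionError(f"{app} may not read {path}")
            return self.data[path]

    # Hypothetical usage: the app has no backend of its own; it works
    # directly against the user's own store.
    pod = Pod(owner="alice")
    pod.put("/contacts", ["bob", "carol"])
    pod.grant("birthday-app", "/contacts", ["read"])
    print(pod.read("birthday-app", "/contacts"))    # granted: ['bob', 'carol']
    try:
        pod.read("ad-tracker", "/contacts")         # never granted
    except PermissionError as err:
        print(err)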
It’s an exciting world in which, if I build an app– if I write a neat program today and go to sleep– tomorrow I can find out that people are using it all over the world, without me having to build a backend, because they are using it with their existing stores. So for me, that’s exciting. Having
that project at MIT, it was great to have
colleagues, space, and excitement– and energy,
and review at MIT and CSAIL. I hope that the college
will do many things. Also, we have a startup now. And one of the things I
think it’s great to be at MIT is that MIT is famous for respecting this, and for making it as easy and straightforward
as possible for you to be able to spin these things out
into companies, when you feel that you need to have
a commercial energy behind these things. So the startup is Inrupt. Thanks to Glasswing
for funding it, even though it looks like a
really interesting, different sort of project. And folks, do go to inrupt.com,
or solid.inrupt.com. The hope is that with Solid we can give this arc of the web a mid-course correction– an arc which initially started off as a sort of potentially utopian thing, but now seems to be heading towards a very dystopian future for democracy, for education, and so on, in which all of those things we hoped to do in the utopian days may be severely threatened, or already are largely destroyed. So with projects like Solid
re-decentralizing the web, turning it back into a place
where individual people have a mandate, individual
people have power, then we’re hoping that the
trajectory over the next 10, 20 years will be towards massive
individual empowerment, massive ability of groups to be
able to collaborate and solve the huge problems in the space. And if John Perry
Barlow rolls over in his grave at this
current situation, maybe his spirit will be
happier with us in the future, as we build more and
more really positive, creative, collaborative,
democratic systems on top of this new version of the web. Thank you. [APPLAUSE]

Well, I’m glad to be here. I’ve been looking forward
to it for 50 years. [LAUGHTER] [APPLAUSE] When I started thinking
about what to say today, it occurred to me that I
have been around for a while. And maybe it would be
my best contribution if I talked a little
bit about where we are and where we come from–
and where we might go, and how we might get there. In the beginning
I thought, well, I will give a comprehensive
overview of everything. But I decided in the end to
point out that where we are is here at a historic moment– not only for MIT,
but for the world. Because computing
is no longer just for practicing professionals,
it’s for everybody. It’s an important thing
to know about, just like literature and history, and
a little bit of mathematics– and, perhaps, anthropology. So that’s where we are. But where we came from,
that’s impossible. I started thinking about this. And a friend loaned me a chart
from the 25th anniversary of Project MAC. And I thought, well,
let’s see, I’ll just cover those milestones. There were about 300. And extrapolating
to today, I think I would have to talk
about 1,000 things, which would give me a little
less than one second each. So I soon gave up
on that, and decided I would give you a personal
history of computing at MIT– and talk a little bit about, and
focus on the greatest computing innovation of all time. That’s my agenda. So to start, I want to start
on my very first day at MIT. As a freshman, I found
myself wandering around in Building 26, looking for
the lecture hall in which I would learn physics. I find myself looking
in this door– looking in this window. It was the IBM 7090 computer. And, boy, was I impressed. It was inspiring. This was the day when
computers had gravitas. [LAUGHTER] They had blinking lights,
and tape drives spun. It was wonderful. But the amazing thing is
that so many wonderful things were done with that computer. The first great
AI program was done on that computer,
a program that did symbolic integration,
the same way I was learning to do integration
in my calculus class. And that computer had– well, this cell phone is somewhere between 50,000 and 100,000 times faster than that computer. And this phone has about
250,000 times as much memory. So it’s amazing that
anything got done on that. But in any case, it
was still inspiring. Well, a few years later– I think I was a senior– I witnessed a debate between
Seymour Papert and Hubert Dreyfus, in a class
taught by Jerry Lettvin. Lettvin was a character. He announced on the first
day that there would be no quizzes and no homework. Everyone would get
a B, unless they did a term paper,
in which case they would get either an A or a C. [LAUGHTER] Well, then came the debate. Dreyfus, a philosopher, argued
that computers could never be intelligent. And he talked about how it would
be impossible for a computer program to play chess
at a championship level. And he talked about
fringe consciousness, and used a lot of big words. And being young and
impressionable, at the end of his talk, I thought, well,
who will have the courage to debate against this wisdom? It wasn’t a face-to-face debate. Papert came in a
few classes later. And in the meantime,
they had somehow arranged to have a
match between Dreyfus and a chess-playing program
written by Richard Greenblatt. And this game enabled Papert
to start his talk by saying, Dreyfus has said that
computers can’t play chess. And if that’s true, then
Dreyfus can’t play chess either. [LAUGHTER] In any event, I started
hearing about the artificial intelligence laboratory. And it seemed like a place
where fun was going to flourish. It attracted people from the
Tech Model Railroad Club– people like Richard
Greenblatt and Tom Knight, who found that computers
were even more fun than model railroads. So I suppose I was right
when a friend of mine suggested that I might go to
see a lecture by Marvin Minsky, and I did. I didn’t really know
what I was going to do. I had found myself
in graduate school. I didn’t know why. My father had started talking
darkly about law school. And I went to this
lecture by Minsky. And there was such
joy in his talk, and such a pride in what
his students had done– and such a passion for what
would be done in the future. I left the lecture
saying, to my friend, I want to do what he does. And pretty soon, I
was doing what he did. And pretty soon after that, he
was talking about what I did. Here we might give it a fourth example, this, and say that is an arch. And the description
of this structure agrees with the description
it’s been building up, except for one small detail– the top thing is no longer
a block, it’s a wedge. And the program has to say,
I’ll accept things that are wedges as well as blocks. And that’s pretty easily
changed by saying, this can be a block or a wedge. Or in the actual program,
it generalizes and says, that can be a prism. Well, the point
of the program is that it doesn’t learn so
much a little bit at a time, as in the traditional
reinforcement theories of learning– which
work very well for rats and very badly for people. But for each example,
the machine jumps to some sort of conclusion–
learns a new relation. And it can learn very fast. It’s learned a lot
from four examples. On the other hand, it
takes a good teacher. If you gave it
misleading examples where there are many differences
between the things it’s seen and the new things,
then it will be at sea. There will be a
lot of differences that it could put in here. And it won’t have any
good way of deciding which differences to
represent in its final result.
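Here is a minimal sketch of that style of one-shot learning– illustrative only, with a hypothetical class tree standing in for the real program: when a single new example differs from the current model in one part, that part is generalized by climbing the hierarchy, not by nudging weights.

    # Sketch of Winston-style learning from examples (illustrative, not the
    # original program): generalize a differing part by climbing a class tree.
    PARENT = {"block": "prism", "wedge": "prism", "prism": "shape"}  # toy tree

    def lowest_common_class(a, b):
        # Walk up from a, then climb from b until the two paths meet.
        ancestors = set()
        while a:
            ancestors.add(a)
            a = PARENT.get(a)
        while b not in ancestors:
            b = PARENT[b]
        return b

    def learn(model, example):
        # Jump to a conclusion from one example: any part that differs is
        # generalized to the lowest class covering both observations.
        return {part: kind if kind == example[part]
                else lowest_common_class(kind, example[part])
                for part, kind in model.items()}

    # The model so far says an arch's top is a block; the fourth example
    # shows a wedge on top, so "top" generalizes to "prism".
    model = {"left": "block", "right": "block", "top": "block"}
    model = learn(model, {"left": "block", "right": "block", "top": "wedge"})
    print(model)   # {'left': 'block', 'right': 'block', 'top': 'prism'}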
So it’s good to know, even back then, we were thinking about
a different kind of AI than the kind that’s
popular today. Today what we have is
statistical and perceptual. And that’s complemented
by the things that were happening back then and
should happen in the future– the cognitive and the
thinking part of AI. In any event, I
finished my degree, and a year or two
later found myself director of the MIT Artificial
Intelligence Laboratory. There’s controversy about
how that came to be. Some say I had
arranged a coup d’etat. Others say I was
tricked into it. But in any case, Seymour
Papert said, don’t worry, you’ll only have to do
it for a year or two. And it turned out to be 25. Dan, wherever you
are, take care– this could happen again. [LAUGHTER] So a short time later– really, rather at the beginning,
I knew I was young and stupid and didn’t know anything
about running a laboratory. So I went around MIT asking
department heads and laboratory directors how I could make
the artificial intelligence laboratory a great laboratory. And to my surprise, in
my first dozen efforts, no one had any ideas. They hadn’t thought
about the question. So then I thought,
in desperation, I would go to see Jay Forrester. Forrester had built
the Whirlwind computer in the late ’40s and early ’50s. And it was to be a prototype
for the computer that ended up in the SAGE air-defense system. And that was a really
magnificent computer. The relay racks there
were 11 feet tall. They employed hundreds
of people to build it. It was the first computer
with magnetic core memory. It was the fastest computer. And when I went to see
Forrester it was frightening. There was a table with
a white tablecloth, that was set for tea and cookies. Forrester was in an immaculate
suit and well-chosen tie. I wasn’t. [LAUGHTER] And for the first 25 minutes
of our 30-minute interview, he told me why we
should not have an artificial-intelligence
laboratory at MIT. I never did understand why. But, finally, in
desperation, I said, well, Professor
Forrester, it must have been a great
laboratory, because of the excitement
associated with building that wonderful computer. And he looked at me like I
was the king of the fools and said, young man, we weren’t
trying to build a computer, we were trying to protect
the United States against air attack from the Soviet Union. And that had a big effect on me. Because what it told me
is, you don’t become best by wanting to be best. You become best by having a big
mission, and then being best will take care of itself. So there we are. We have inspiration,
courage, joy, and mission. And so it’s natural
to think, well, what should be the mission
in the new college? And, to me, the mission ought
to be to take everything at MIT to another level– not just computing,
but everything. And that ought to
be in the service of an even bigger mission,
which, as President Reif said in his inaugural address, is what MIT is about: solving the unsolvable,
shaping the future, and serving the
nation and the world. But it isn’t all serious, as
even Forrester pointed out. Listen to this one. And before leaving,
we would like to show you another kind
of mathematical problem, that some of the
boys have worked out in their spare time– in a
less serious vein for Sunday afternoon. [MUSIC – “JINGLE BELLS”] Yeah. So they had fun, too. But you know, did
you see those words? The things that they worked
out on a Sunday afternoon on their spare time. They were spending
a lot of money, and they didn’t
want the taxpayers to think that they were
doing just frivolous things. So what’s left? There is something
that’s left, it has to do with curiosity. And by curiosity, I don’t
mean just ordinary curiosity. I mean the kind that
leads to great things– that sort of
out-of-control curiosity that led Copernicus
to figuring out where we are in the universe,
and Darwin to figuring out where we are in evolution, and
Franklin, Watson, and Crick figuring out the
nature of our biology. And when you say, well,
what could possibly be next, that brings me back to the
greatest computing innovation of all time. And what’s that? It’s us. We are the greatest computing
innovation of all time. Because nothing else
can think like we think. Chimpanzees can’t do it. Neanderthals can’t do it. And we don’t know how to
make computers do it, yet. But it’s something
we should aspire to. And it’s something that
we’ve been aspiring to for a long time. The Greeks started
thinking about thinking. Alan Turing started
thinking about whether computers could think. And Marvin Minsky
showed us how to do it. But in going forward, I think
we have to go backward too. And not just a little bit– about 75,000 years. That’s when we started thinking. And this is what Ian
Tattersall had to say about it. So what do you mean
by re-combining? Well, as Berwick
and Chomsky have noted in their seminal
book Why Only Us, it’s all about the ability
to put symbols together to build symbolic descriptions. And once you have
that operation, which they call merge, then
you get to what I call the strong-story hypothesis. Yes, this is a
strong-story hypothesis, that says the way we differ from
other species is in the stories that we tell. And we start with our
stories in childhood. They persist
through high school, and eventually become
the studied stories in areas like these– which happen, of course, to
be the five schools at MIT. [LAUGHTER] So some say that,
if we do all that, we will be partaking of
another forbidden fruit, and that this knowledge will
become an existential threat. I’m more optimistic. I am optimistic. Because, to me, unless we
get hit by an asteroid, our biggest existential
threat is actually us. So I take a more
optimistic view. And I think that,
in the end, there’s no reason why computers
can’t think like we– they can’t be ethical and
moral like we aspire to be. Some say, ethical
and moral as we are? How could it be possible
for a computer to do that? Well, every time I
watch the evening news, I think to myself, it
can’t be that hard. [LAUGHTER] So I don’t know
what others may do. But as for me, what I hope to do
with my friends and colleagues and like-minded people is go
forward into the future with these kinds of ideas
painted on a wall– the desire to put all
those things together to develop a greater
understanding of ourselves and how we think, and
how other people think. And that can’t help
but be a good thing. And that’s the end of
my story for today. But I hope it will
be just the beginning of a story that will be told
in the days and years to come. [APPLAUSE]
