Seeing, believing, and computing

[music] My name is Garrett Souza. I’m a Course 6-3 computer science major. I’m doing a project analyzing the effect of visual
media on implicit biases. I’m involved in the fashion magazine on campus. I’m a very visual learner, and I’ve been more
and more interested in how people are actually perceiving the images I take and also how
I’m consuming images. I think nowadays we have Instagram and Facebook
and we’re seeing thousands of photos a week, and I was wondering, “How is that affecting
me implicitly?” If I’ve looked at twenty images portraying someone who looks like
a certain person in a certain way, has that impacted how I then interact with that person in real life?
My guess is yes, but that’s what I really wanted to study.

Affective computing is really geared towards how people can naturally engage with computers: any device that can measure some sort of affect
or emotional response. So we use cameras that can detect micro expressions
in the face, that can signal happiness, sadness, fear. And then there’s also wrist sensors that can
detect electrodermal activity that will signal stress and anxiety. And then also eye tracking: where are you
looking on a screen? What parts of an image are specifically piquing
your interest?

Elections, for example: when a news outlet
is reporting on a specific political candidate, what images are they choosing to portray
that candidate’s position, and how are those images biased by the news outlet’s own
preferences? And then how is that impacting the millions
of people that are watching?

The ideal outcome of this project is to have
a quantitative analysis of, “If this is the media you
consume on a day-to-day basis, this is how it’s actually impacting you.” I think that would lead to a lot of people being
more cognizant of what they’re consuming, and also of what people are producing.

My hope is that this computational thinking
is used with the implications of the technologies being created in mind, and with the biases within the new approaches in mind. Face detection is trained on faces; that’s how we generate the
neural networks. And if those faces are all white or Caucasian,
the software is so much worse at detecting the faces of darker-skinned people. How is that impacting the neural networks
of face-tracking software that the government is using in webcams across our nation? Are there biases present in that?

It’s a really satisfying feeling when you talk
to someone and you fundamentally believe, not that they agree with you necessarily,
but that they just understand you. And that they aren’t misconstruing your words,
because even language is so hard. It’s so hard to actually say what you’re trying
to say. “An image is worth a thousand words,” or whatever. It’s such a cliché statement, but I think
it has some merit, in that visuals make it so much easier to convey and interpret a broad, robust array of things.

It’s really hard to go through life
if you feel like you’re not really being seen. And I think that’s where the desire for creativity
comes in. At least for me. [music] [music and background chatter]

9 thoughts on “Seeing, believing, and computing”

  1. How we perceive an image depends on our thinking, really. Never gave it a thought that it would be the other way round. Interesting…
