2.23.21-AI-Isola
Conference Video | Duration: 16:58
February 23, 2021
Video details
The last few years have seen an explosion of powerful generative models: models that can synthesize fake faces, landscapes, text, audio, and more. The results are fascinatingly realistic, but it's not immediately clear what they are useful for. We already have billions of images of faces; why do we need a model to make more? I will argue that the real power of these models is not their ability to make random fake data, but that they make a new kind of data: data that comes bundled with controllable latent variables. I will focus on deep generative models of images, which synthesize a photo given an input vector of latent variables. The latent variables are knobs that control what the output will look like: a user can tune them to change the lighting conditions in a photo, rotate objects, add or remove elements of a scene, and much more. I will show applications in image editing and scientific data visualization, and I will suggest that this new kind of data, sampled from deep generative models, can be thought of as data++: it looks just like regular data, but comes with extra functionality.
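To make the "latent knobs" idea concrete, here is a minimal sketch in PyTorch (not code from the talk): a generator G maps a latent vector z to an image, and moving z along a direction in latent space edits an attribute of the output. The Generator network and lighting_direction below are hypothetical stand-ins; in practice one would use a pretrained model such as a GAN or VAE decoder, with edit directions discovered by methods like supervised probing or PCA over latent codes.

import torch
import torch.nn as nn

class Generator(nn.Module):
    """Toy stand-in for a pretrained image generator G: z -> image."""
    def __init__(self, latent_dim=512, img_pixels=64 * 64 * 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 1024),
            nn.ReLU(),
            nn.Linear(1024, img_pixels),
            nn.Tanh(),  # pixel values in [-1, 1]
        )

    def forward(self, z):
        return self.net(z).view(-1, 3, 64, 64)

latent_dim = 512
G = Generator(latent_dim)

# Sample a latent vector: one synthetic photo's worth of "knobs".
z = torch.randn(1, latent_dim)
image = G(z)

# A (hypothetical) unit direction in latent space that controls lighting.
lighting_direction = torch.randn(latent_dim)
lighting_direction /= lighting_direction.norm()

# "Turn the knob": move z along the direction to relight the same scene.
for strength in (-2.0, 0.0, 2.0):
    edited = G(z + strength * lighting_direction)
    print(strength, edited.shape)  # torch.Size([1, 3, 64, 64])

Because every generated image comes with its latent code attached, edits like the loop above are what distinguish this "data++" from a static photo collection: the same scene can be re-rendered under different settings of the knobs.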
More Videos From This Event

February 2021 | Conference Video
2.23.21-AI-Solar-Lezama: Towards AI that Learns to Write Code

February 2021 | Conference Video
2.23.21-AI-Yildiz: Electrochemical artificial synapses for brain-inspired computing

February 2021 | Conference Video
2.23.21-AI-Autonomy-Startups: MIT Startup Exchange Lightning Talks

February 2021 | Conference Video
2.23.21-AI-Fan: Building Dependable and Verifiable Autonomous Systems

February 2021 | Conference Video
2.23.21-AI-Roy: Autonomous Flight in Urban Environments: Challenges for Perception and Planning

February 2021 | Conference Video
2.23.21-AI-Benjamin: The MIT MOOS-IvP Open Source Marine Autonomy Project