4:50 Schillings said that X strives to quickly shut down projects that aren’t working, while also trying to learn from and preserve key ideas that could be applied to different projects in the future. They refer to these preserved ideas as “compost.”
When a project is shut down, they also try to identify the “cause of death,” noting what went wrong. X periodically collects these notes and conducts “tales from the crypt” sessions, during which they review old ideas and ask whether something has changed that might make the time ripe to revisit them.
4:42 Asked how X’s thinking has shifted since it was founded 12 years ago, Schillings says that they were initially overly focused on radical solutions, “without much rigor on, ‘would this really work?’” They’re still striving to preserve high levels of creativity, but have become more demanding on “key criteria and evidence that an idea is worth investing in.”
4:36 Now we’ll hear from Benoit Schillings, chief technology officer at X, Alphabet’s “moonshot factory,” which helped develop self-driving cars, the Google Brain machine learning tools that power many Google products, and numerous other projects.
4:30 Werner acknowledged that financing is tightening in the current economic environment. But she said, “there’s still quite a bit of capital” allocated for companies that could “have a huge global impact” on big challenges like climate change. She added that the recently passed Inflation Reduction Act will also provide vast amounts of funding to support pilot and demonstration projects for emerging clean tech industries in the US.
4:24 Werner says that in selecting startups to support, the Engine looks for companies that could “bring prosperity to the entire world,” and they focus on the people. They try to find and build relationships with leading researchers at the top labs that are trying to solve hard problems across climate change, human health and other categories.
4:03 Hello, and welcome to the final chapter of the final day of EmTech 2022! I’m James Temple, an editor at MIT Technology Review. Our next two speakers will be discussing “New Frontiers in Hard Technology,” in conversation with David Rotman, MIT Technology Review’s editor at large.
First up is Milo Werner, general partner at The Engine, an MIT-backed venture firm focused on “tough tech.”
3:38 That’s all for this session on artificial intelligence! Join us in a bit for our next session on how to tackle the hardest problems in tech.
3:35 An audience member asks about artists who are upset about AI programs generating images that copy the style of real artists.
Stevenson says that, from an artist’s perspective, it may be advantageous to not have only one style anymore, because “anyone who has a single style, they’re the best training a model.”
He wonders whether it would be possible to have artists “opt out” of allowing their artwork to be consumed by AI training models in the future.
Jang says that OpenAI is in communications with policymakers about how regulations and guidelines should work. “I think the law really needs to catch up to this,” she says.
3:29 Overcoming the bias and harm that AI systems pick up from large training datasets is a “work in progress,” Jang says.
3:24 Will asks: Will these new technologies take away artists’ jobs?
Stevenson, who previously worked at DreamWorks, says we saw the same type of anxiety when computer animation was introduced. There’s first an existential crisis, then artists quickly learn to use the tools and incorporate them into their processes.
“When computers were first announced as a tool for animation in the 90s, a lot of people are like, ‘That’s it—my whole job.’”
3:19 Will asks about OpenAI’s new DALL-E API.
Jang says that the company has been careful about the rollout from hundreds of users to now over 3 million users, because of concerns about safety. Engineers have slowly tweaked filters and loosened restrictions.
3:12 Now speaking is XR creator and independent consultant Don Allen Stevenson III. As an artist, he finds programs like DALL-E useful for helping him with certain aspects of his designs.
“I struggle with composition, so it’s really helpful to have something that can help make the framing of it,” Stevenson says. He says DALL-E can help him with storyboarding, 3D modeling, and creating VR avatars.
3:07 On stage now is Chad Nelson, chief creative director at Topgolf/Callaway. At Topgolf, part of designers’ job is to create virtual golf courses.
He says programs like DALL-E are transforming how designers work. It “takes 15 seconds to generate, so it’s changing the way creatives go through their daily process and the whole creative process.”
3:00 Jang says that using DALL-E still requires human originality and creativity. It’s able to “take the salient features of an image and alter certain aspects of it while keeping the core,” for example creating slightly different original images around the same theme.
She describes it as “image search for your imagination.”
Jang announces that, starting today, DALL-E is now available as an API.
2:54 Jang explains that DALL-E works by showing an AI model millions of images; the model learns from them to generate its own original images, akin to how a child might learn.
“It learns the concepts and how different things can relate to one another, and then creates new images.”
2:48 This next lineup of speakers is going to be talking about generative AI and how to design with AI image generation technologies. On stage now is Joanne Jang, product lead of DALL-E at OpenAI.
2:47 An audience member asks whether engaging deeply with AI helps humans to ask more interesting and deeper questions.
The speakers think so. Building new AI systems using new technologies allows people to consider problems in a new light, Hadsell says.
“Using reinforcement learning we’re able to come up with faster ways to do a really, really fundamental piece of computer science, and I think that that’s really exciting,” she says.
2:39 Will moves on to discussing responsible AI.
LeCun says that AI is part of the solution for making technology more responsible. He gives the example of Facebook’s partially automated content moderation process, which is able to scale up moderation for billions of users. “You cannot do content moderation unless you use AI.”
2:34 Will asks the speakers about the difficulty of building AI in a world where the systems have to interact with humans, who are often unpredictable and irrational.
Hadsell says that AI will get better by interacting more with people, just like what happened with Wikipedia when it first was introduced. Wikipedia illustrates “one way in which technology can really grow in its interaction with people in the world.”
2:26 Hadsell says that robotics, although it may seem like a tangential discipline, will be crucial to unlocking artificial intelligence’s potential.
Robotics is relevant to AI “because we want to do useful things in the physical world,” Llorens adds.
LeCun says that AI has to go beyond preprogrammed behavior. “The mistakes that we see large language models are doing are due to the fact that those systems don’t have any sort of knowledge of the underlying reality that this language expresses.”
2:19 Will asks LeCun: What is AI still unable to do? LeCun says AI systems still are unable to learn the way humans and animals do, by observing the world.
“A big challenge for the next few years is to get self-supervised learning methods that would allow systems to learn everything about the world there is to learn by watching video, essentially.”
2:12 Llorens is impressed by the progress of language models and language processing technology, which has advanced even within the last few years.
“It enables you to interact with machines in a much more intuitive way. We see the ability to take an expression of intent and turn that into a text completion, turn that into an illustration, even a video now.”
2:06 Will asks the panelists, “What is AI?”
The panelists say that the term is broad and has changed a lot. “I find that it’s most helpful to think about AI in terms of its purpose,” Llorens says.
“It’s a moving target,” LeCun says.
Hadsell suggests that it’s a term that’s been diluted by overuse. “I was recently shopping for a refrigerator and was trying to find one that didn’t say it was AI.”
2:01 On stage is Will Douglas Heaven, senior editor for AI at MIT Technology Review, who will be hosting the speakers for this session.
Joining him are Ashley Llorens, vice president and managing director of Microsoft Research; Yann LeCun, VP and chief AI scientist at Meta; and Raia Hadsell, senior director of research and robotics at DeepMind.
1:50 Nelson says everyone is a designer, whether they realize it or not.
But professional designers are often designing for things that they won’t directly use. For example, when Kyndryl worked with Dow Chemical to design for their engineers, it was important to understand the users’ process. Nelson calls collaborating with a diverse group of people to come up with good design “co-creation.”
1:38 Up next, we’ll be hearing from Sarah B. Nelson, chief design officer at infrastructure service provider Kyndryl. She will be sharing ideas on how to design for human-centered organizations of the future.
1:36 Welcome back to this afternoon’s session of EmTech, where we’ll be learning about what’s next in artificial intelligence. I’m Tammy, a reporting fellow with MIT Technology Review.
12:25 That’s it for this morning’s sessions! We’re going to take an hour’s break for lunch, but we’ll be back afterwards, when my colleague Tammy Xu will be handling our AI sessions. Catch you later!
12:15 Matsuhisa has been working on stretchable sensors and displays that could be worn on your skin, flexible and lightweight enough to sit undisturbed on your wrist, for example.
He became involved in the field purely out of curiosity, he says. The concept of stretchy conductors really appealed to him, and he has since invested a lot of time into developing new, flexible devices. They could prove extremely beneficial in the development of soft robots, he suspects.
12:04 Where is technology heading next? The individuals on MIT Technology Review’s Innovators Under 35 list, our annual search for the brightest young minds tackling the biggest technological challenges, are dedicated to finding out.
Naoji Matsuhisa is an associate professor at the Institute of Industrial Science at the University of Tokyo, and a member of the 2022 Innovators Under 35 list for his work with stretchable electronic materials and devices. He has developed a stretchy diode made of thin rubber sheets to help monitor people’s health more efficiently than more rigid devices, which are poor at maintaining good skin contact.
12:00 There are billions of interactions with Alexa every day, so the quickest way to reach a trillion interactions would be through more devices answering a wider range of requests, Sharma says.
When asked how to build a knowledge graph, he laughs and admits that’s a big question. “That’s actually a pretty big undertaking—a lot of the information in the world contradicts each other,” he says, but acknowledges it’s not all purely technological—on the whole, such large-scale knowledge graphs still rely on humans to keep them accurate and healthy.
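Sharma’s point about contradictory information can be illustrated with a toy sketch (this is purely illustrative and in no way reflects how Amazon’s knowledge graph is actually built): facts stored as subject–predicate–object triples, with a check that surfaces conflicting entries for the kind of human curation he describes.

```python
# Toy knowledge graph as subject-predicate-object triples.
# Illustrative only -- not Amazon's (or anyone's) real implementation.
from collections import defaultdict

class KnowledgeGraph:
    def __init__(self):
        # (subject, predicate) -> set of objects asserted by different sources
        self.facts = defaultdict(set)

    def add(self, subject, predicate, obj):
        self.facts[(subject, predicate)].add(obj)

    def query(self, subject, predicate):
        return self.facts.get((subject, predicate), set())

    def contradictions(self):
        # Entries where sources disagree: these are the facts that still
        # need a human to keep the graph "accurate and healthy."
        return {k: v for k, v in self.facts.items() if len(v) > 1}

kg = KnowledgeGraph()
kg.add("Mount Everest", "height_m", "8848.86")  # 2020 China-Nepal survey
kg.add("Mount Everest", "height_m", "8848")     # older sources
kg.add("Amazon Echo", "launched", "2014")

print(kg.contradictions())  # only the Everest entry conflicts
```

Even this tiny example shows why the problem is "not all purely technological": the code can flag that sources disagree about Everest’s height, but deciding which value is correct requires human judgment.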
11:50 The rise of voice assistants and ambient AI computing has been pretty phenomenal, and helped to usher in new forms of computing that will continue to unfold over decades, says Sharma. “You don’t want this thing to feel like a tool, you want it to feel like it’s part of you,” he says. “When you’re putting on smart glasses, you want to move on from holding a phone in your hand to something that’s new to you.”
11:30 Voice assistants have been one of the biggest consumer tech success stories of the past decade, with 60% of US households believed to currently own a smart speaker. We’re now going to hear from Vishal Sharma, the vice president of Alexa AI at Amazon, which spearheaded the smart speaker sector with the launch of the first Amazon Echo in 2014.
Sharma is going to dig into the future of voice computing and how it’s revolutionizing the way humans interact with their devices, and, crucially, their data.
11:25 Herr lost both legs below the knee following a mountain climbing accident when he was 17, and uses smart prostheses. He says he hopes to go under the knife again to receive critical tissue manipulations and implants, in the hopes of making the relationship he has with his prostheses closer to the one other people have with their ankles.
“It’s kind of like never being able to drive, like you’re always in the backseat of a car,” he says. “And if someone’s driving, I want my hands on the wheel and actually want to be part of the car.”
11:15 Herr’s lab is focusing on four projects across four years, one of which involves developing an exoskeleton that could help people who have muscular weakness after experiencing a stroke. It can also help people who do not have muscular weakness to run and move more quickly.
Another project revolves around improving the current technology designed to help people with spinal cord injuries. Many people with these types of injuries use a wheelchair, while the few who can afford it can use exoskeleton devices with external motors and processors that move in parallel. Herr’s lab instead wants to activate a person’s existing skeletal muscles by shining light on the skin to stimulate the nerves underneath.
11:00 We’re back, and next on the agenda is Hugh Herr, a media arts and sciences professor at the MIT Media Lab and co-director of the K. Lisa Yang Center for Bionics. He is a leader in the emerging field of biomechatronics: technologies designed to enhance the human body using machinery.
10:28 We’re now going to take a half-hour break, and when we come back we’ll be talking all things body tech. See you on the other side!
10:20 AR works best in environments such as museums, galleries or other cultural institutions when it’s least expected, says Cason. Layering an AR experience over the top of a piece of art that’s already beautiful risks detracting from it, she says, whereas if you put an experience around something like a sign, or a box, people are surprised and delighted to interact with it—which is exactly the kind of reaction its creators want.
A metaverse-style approach to AR is not something she can ever see taking off. “I don’t want to fully invest in an AR or VR world,” she says. “I’m not super-excited about it because AR is for a specific time, it doesn’t work properly when it’s on all the time.”
10:10 AR is a more natural way of interacting with technology than VR, says Murphy, because it tallies with the way we move through the world, including how we move our heads to look around us. “AR really aligns to how we as humans already naturally operate,” he adds.
“AR is a key component of our ability to drive engagement and enable people to do some fun and really exciting things,” he says.
“It’s true there’s a lot of potential privacy risks in the way AR is used,” he adds. “We look to always put a time limit and an expiration date on data and keep it for the minimum amount of time needed to get it to work effectively.”
09:50 Next on the agenda is augmented reality (AR)—the technology we use to layer digital elements including everything from interactive filters to gaming assets into our real-world environment.
Joining Charlotte onstage is Lauren Cason, a creative technologist who’s worked on award-winning video games including the charming Monument Valley 2, and Bobby Murphy, one of the cofounders and current CTO of Snap.
09:45 As a society, we need a project, says McCourt. He wants to know where the innovation is, and where we can find space for discussion regardless of politics, background, race, and ethnicity. Concentrating on a single project, such as the race to put a man on the moon or the Human Genome Project, helps focus people’s attention and heightens the likelihood of actually achieving something.
“We’re in a new Cold War,” he says. “We want all democracies to get involved with Project Liberty, but Europe seems like the place to start because they’re ahead in terms of public policy objectives, such as human rights. In the US, our technology is being used much more like it is in China because it’s centralized, our data is scooped up and can be used to manipulate people.”
09:25 If our information is corrupted, everything just becomes noise, says McCourt. Fixing the inherent problems with social media isn’t the be-all and end-all of cleaning up the internet, but it’s a worthwhile place to start because of social media’s disproportionate power. “I just don’t think tweaking what we have is going to work,” he says. “This is the moment right now to fix the internet—and get it right this time.”
09:10 What will it take to remake the internet into a fairer, more equitable place? Frank H. McCourt Jr., a civic entrepreneur and the CEO of investment firm McCourt Global, believes the current internet is built on a fundamentally broken model. Instead, he argues, we should be looking to a new internet architecture built around the needs of users rather than corporations.
09:00 Hello, and welcome back to the final day of EmTech 2022! I’m Rhiannon, a reporter at MIT Technology Review, and today we’re going to be focusing on the technologies that hold the biggest potential to change our lives, one innovation at a time.
We’re going to dive straight in with some welcome remarks from our news editor, Charlotte Jee.
Come back to this page for rolling updates throughout the day as we kick off the final day of EmTech 2022, MIT Technology Review’s flagship event on emerging technology and global trends.
Global changemakers, innovators, and industry veterans will take to the stage to distinguish what’s probable, plausible, and possible with tomorrow’s breakthrough technologies.
We’ll be hearing from some of the biggest names in the industry, discussing everything from how to get promising ideas off the ground and commercialize space, to building tomorrow’s AI and tackling the world’s biggest challenges.
Today we’ll be focusing on unpacking what the future holds for Web 3.0, body tech, and AI. Yesterday’s schedule explored the exciting technologies promising to change our lives.
Programming starts at 9am ET, and it’s not too late to get online-only access tickets, if you haven’t already.