
Technological Directions (Panel 1 of Technology and Society in the Next Generation conference)



Published May 10, 2023, 10:40 a.m. by Monica Louis


Technology is constantly evolving, which means that the directions it is heading in are constantly shifting as well. In this article, I will discuss some of the technological directions that I believe will shape the next generation.

One direction I believe will take shape in the next generation is the continued development of artificial intelligence. AI is the field of study concerned with creating intelligent machines. As AI becomes more advanced, it will be able to perform tasks that are currently considered too difficult or too complex for conventional software. For example, AI could one day be used to design new computers and new products, and even to diagnose diseases.

Another direction I believe will take shape in the next generation is the development of blockchain technology. A blockchain is a digital ledger that can be used to track transactions in any type of asset — not just financial assets, but also digital assets such as intellectual property. Blockchain technology matters because it allows for tamper-resistant transactions, reducing the need for third-party verification and enabling faster, more secure exchanges.

In addition to these two main directions, I believe the next generation will also see the development of augmented reality and virtual reality. Augmented reality overlays digital content on the user's view of the real world — for example, a virtual version of a real-world object placed in your surroundings. Virtual reality immerses the user in a fully simulated environment — for example, a virtual world entirely different from the real one.

Overall, I believe that the next generation will see the development of many innovative technologies. These technologies will allow us to do things that we never thought possible, and they will change the way that we live and interact with the world around us.


I would like to thank everyone for making the trip, whether it's a few miles or a few thousand miles, to our first panel. The conference was organized by myself, Martín Sánchez-Jankowski, and Lynn Chancer, who are sitting there. I myself am a senior researcher at the Institute for the Study of Societal Issues, which is part of the Berkeley campus, and Martín was the director of the institute. As you know from last night, if you were there, the conference is sponsored by Hunter College, CUNY, and the Berkeley institute, and we have funding from the Sloan Foundation and the Gerald Huff Fund for Humanity.

We're going to hear today about the technological developments that will form the basis for the rest of the conference, and we have four wonderful presenters to talk to us about some recent technological innovations. We're going to start with Dr. Pamela Silver from Harvard, who is a systems biologist and a bioengineer. Dr. Silver holds the Elliott and Onie Adams Professorship of Biochemistry and Systems Biology at Harvard Medical School, and she is known for her pioneering work in synthetic biology. She will talk for 25 minutes; I think we have time for that.


Okay, thank you. Good morning, and good morning to the people on Zoom. I want to think about how we make a beautiful, sustainable world by 2050 for the 10 billion people who will be occupying it. I also want to mention that in that time period maybe there will even be people on Mars, and the technologies we develop for going to Mars are going to be some of the same ones we need to sustain a beautiful life on Earth.

Now, this conference is about technology, and I want to do a little framing through the lens that was mentioned last night, of 1930 forward. In the early part of the last century, synthetic organic chemistry was the technology of the day, and that brought us, of course, nylon and plastics — remember the film The Graduate? "Plastics." Look where that got us — but also lots of things that I think have been good for moving the world in a better direction, and that led to "better living through chemistry," which was a theme I grew up with. Then, in the middle of the last century — I'm a child of Silicon Valley, so this is very intimate for me — came the development of the microchip, which led to the technological revolution we've been living through.

I want to argue that the next thing is the engineering of biology, and that is what is going to transform life as we know it. We understand so much about biology, given the research of the last fifty-some-odd years, that we are in a place to begin to treat the engineering of biology much like synthetic organic chemistry — and hence we call what we do synthetic biology.


Let me give you some of the impacts we have already experienced in this century from the precise engineering of biology. In the environment — I will talk a little about some of our work on carbon sequestration. Huge advances in agriculture, of course. Health — we're living that dream. Commodities — you are seeing the advent of so-called green commodities; has anyone had an Impossible Burger? That is a product of genetic engineering, and it's labeled GMO — and there are many other products, both high- and low-value commodities. And then I want to interject safety into this conversation, both in terms of keeping us safe from what's going on in the food supply, for example, and, conversely, thinking about how we safely deploy the technology.

Okay, so I think we are living a global biology lesson. I'd be willing to challenge you — the audience here spans a very diverse range of ages, and I don't know the ages online — but everyone, or all your parents for those who are younger, knows what mRNA is; they know what an antibody is. That's amazing. So if nothing else — I mean, we've made this great vaccine, but we've also started to put the language of biology into the world, and that is so hard to do. This has been a really wonderful case where you can see people at all levels using the language of biology.

Now I want to mention the development of the mRNA vaccines. This is a slide from my friend Melissa Moore, chief scientific officer at Moderna, which illustrates the amazing speed at which they were able to get to clinical trials for the vaccine that many of you have enjoyed.


Why is that? I have a little bit of a personal vignette on this. There's a tendency to say, oh, we weren't ready, we were behind. I want to give you a different point of view: in fact, the U.S. government has been funding research in this space for probably longer than I can remember. I was involved in a DARPA program that started, I think, about 15 years ago, and the pre-Moderna company was part of that program, and the dedicated program manager of that program believed in them and supported them. I just have to give a shout-out to Dan Wattendorf — he's very reserved, but he made this happen — and it gave me a window into how, even though it's DARPA and can be perceived as defense-based, it was amazing.

Now, there's another thing that happened here, and that is what happened in the first couple of days. How was Moderna able to do it? They were ready at the right time — if it had been one year earlier, maybe not — but what was remarkable was the speed at which they could synthesize the messenger RNA that became the foundation for the vaccine. Two days. Amazing. That's transformative: if you're thinking of a protein-based vaccine, it's going to take a lot longer. So this technology is literally transformative.

We heard some last night about open access, openness, and communication, and I want to give you the ultimate example of where open communication basically saved the world. The sequence — there are a lot of details I may get wrong here — but the sequence of SARS-CoV-2 is key to that vaccine, right? How? It was sequenced in China. They could have waited until they published it — it goes to Nature, it could have taken a few months; imagine where we would have been. But instead, through the process of communication over the internet, a researcher in Australia received the sequence from his Chinese colleagues and posted it on Twitter. Is that amazing or what? So everybody had access, including Moderna, and they went from a sequence posted on social media to the first day of synthesizing that mRNA. Now, you might say, how do you believe the sequence? How do you know it's real? It turns out their technology was so fast and so cheap that it didn't matter — they could give it a shot. So that's another underpinning of an amazing technology.

And by the way, I have no personal interest in Moderna. I was just stunned at the idea that you would take a risk based on something you read on Twitter — and we're going to see more and more of that. You can also argue the negative side of that, but let's go with the positive.


Now I want to give you a flavor — my real-world flavor — of the vaccine situation. Every year I like to go off somewhere remote on a sailboat, which was not possible during COVID, so I took my first trip about a month or so ago, down to some remote islands in the Grenadines, in the Caribbean. As you move from island to island — it's a little bit of a hassle, it turns out, because every island is a separate country — you have to get a new COVID test, and that's a PCR test, except for coming back into the U.S. So this is a COVID testing clinic, I think on Union Island, and it's interesting — notice the sign says "the China coronavirus." These are the perspectives you get outside the U.S. This was a PCR-based, same-day test. This is the wonderful doctor who comes to this outdoor clinic and gives us the test if we want it; on other islands they would just come to the boat. And this — oops, what happened there — this is what the clinic looks like, outdoors; note the turtle sitting under there. These are some of my crewmates on the boat after their PCR test. For me this was a magical moment, because I was just starting in molecular biology when PCR was discovered and developed, and look — now it's being practiced in the developing world. For me this was like magic.

Okay, let me frame a few other ideas that are on my mind, and say a little bit about some contributions we've made. Thinking about programming life, for me the key things are what life does: it grows, it uses energy, it can make energy, and it repairs itself. So why can buildings be made of wood and yet not repair themselves? What I want to dream about is: why can't your clothes repair themselves, or clean themselves? This is a house made of living matter — why can't that house do photosynthesis and regrow? So I see the grand challenge in this idea of using biology, and this has been the underpinning of synthetic biology from day one: how do we make the engineering of biology easier — so everyone can do it — more predictable, and faster? I go back to the Moderna case; it's the perfect encapsulation of what you want from the engineering of biology. But if you're going to build the house, you've got to go to a different scale and complexity, and so we're still taking baby steps here. We are making forward progress toward some of these goals, which I think will be realized over the next 30 years.


Now, why is that? First of all, DNA is the substrate of biology — well, you could say RNA in the case of Moderna, but you still need nucleic acid synthesis. And the cost of DNA — sorry, the cost of DNA sequencing, my apologies — has dropped, and it has dropped faster than Moore's law. As you may or may not know, we can in principle sequence every human on the globe, and there are projects to sequence every organism on Earth. This is just remarkable — think about it: the structure of DNA was solved roughly 60 years ago, and now we're talking about knowing everyone's genome, everything's genome. That in turn yields an enormous amount of information that can be used for engineering, which gets me to the red line, which doesn't look as impressive but is pretty good: the drop in the cost of synthesizing DNA, which is the substrate for engineering biology.

So now — and you've probably heard about this — you can synthesize a virus, no problem. Depending on who's doing it, it's about a penny a base pair, or in some cases about 0.1 cents a base pair. So you can do the back-calculation of how cheap it would have to get before you could synthesize, say, a human chromosome. We can now synthesize a bacterial genome for under a million dollars, and that price will continue to go down.
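To make that back-calculation concrete, here is a rough sketch in Python (not from the talk; the per-base prices are the round figures quoted above, the genome sizes are approximate public numbers, and the output is illustrative only):

```python
# Back-of-envelope synthesis costs at roughly $0.001-$0.01 per base pair
# (the range quoted in the talk). Genome sizes are approximate; the point
# is only the order of magnitude.

COST_PER_BP_LOW = 0.001    # dollars per base pair (~0.1 cents)
COST_PER_BP_HIGH = 0.01    # dollars per base pair (~1 cent)

targets_bp = {
    "typical virus (~30 kb)": 30_000,
    "bacterial genome (~4 Mb, E. coli-sized)": 4_000_000,
    "small human chromosome (~47 Mb, chr21)": 47_000_000,
}

for name, size in targets_bp.items():
    low = size * COST_PER_BP_LOW
    high = size * COST_PER_BP_HIGH
    print(f"{name}: ${low:,.0f} to ${high:,.0f} in raw bases")
```

At these prices the raw bases for a bacterial genome come to thousands or tens of thousands of dollars; the "under a million dollars" figure for a complete synthetic bacterial genome presumably also covers assembly and verification on top of raw synthesis.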


So we have ideas and we have a lot of knowledge — what are some of the problems that need to be solved? Well, I think we all agree that energy is one of the global challenges. When we think about global energy needs, growth in population means increased consumption and need for energy, and a number of those users are going to be in the developing world. Although I think solar is changing this up, in general we have relied on a centralized form of energy production and distribution; that's been the gestalt in the U.S. and other developed countries. The question going forward, as we develop new technologies around energy, is how to make them distributed, available to all, and low in capital expenditure. That's something I think biology can help deliver on. Remember, biology is the best user of sunlight for making stuff, through the process of photosynthesis.

I'll just touch on something we've done in my laboratory, which was to create a bionic leaf — an interface between chemistry and biology. This is a case of real interdisciplinary work; it was done together with my awesome colleague in the Harvard chemistry department, Dan Nocera, who invented an electrocatalyst that, in response to light, will carry out the water-splitting reaction — that's what a leaf does; that's key. Now, what do you do with that? It's sort of a storage problem, and there are bacteria that are capable of taking up hydrogen, fixing CO2, and growing. So we can create an integrated system, all in one place, that can take in light — or the energy delivery can be remote and delivered to the leaf in electrical form — and can then grow biomass.

This is still sort of the gold standard in this space. These numbers may not look that impressive to you, but if you look at CO2 reduction efficiencies, plants are at about one percent; algae and cyanobacteria are the natural winners — I have three percent here, but they can get up to eight percent — and the bionic leaf still wins, at about ten percent efficiency.


Now, that gets me to another huge problem. The great savior of the last century was a technology for making fertilizer, which of course brought us the great revolution in agriculture. Unfortunately, the process for making nitrogen-based fertilizers is now the culprit: it requires very high heat and it emits CO2 — the so-called Haber-Bosch process. So we really should find substitutes for it.

What we can do is take our same bacteria, in the context of the bionic leaf, and they will also fix nitrogen — different bacteria, but the same idea. I want to give you one example. These are radishes, one of NASA's favorite plants, grown in the Harvard arboretum, and the ones on the far right are the ones that got our bionic-leaf-grown bacteria — they're clearly bigger. I could go on with a lot of anecdotes about how these bacteria can support agriculture, but better yet, we were able to form a new company, called Kula Bio, which is doing amazingly at trying to develop this technology for use by farmers. It's been fascinating to talk to farmers and understand what they have to deal with — it's not a simple thing to change how you do farming, and that's another thing to consider when you think about technologies.

Now I want to end with a few inspirations. This is from some work I did together with Neri Oxman at the MIT Media Lab. Neri is a designer, but she thinks a lot about science, and together we thought the dream, as I said, would be living wearables — imagine if you had clothes that were photosynthetic. We didn't get there, but Neri has designed these closed structures that can be infused, in this case, with photosynthetic bacteria, shown on the upper right. The lower right is an example of something she designed that was actually on the Paris runway, and this design here is what we infused the photosynthetic bacteria into. That's one idea.

The other one we really are working on is this: do we have enough knowledge to make freeze-dried organs, or freeze-dried whole organisms? This has enormous implications. Here is an example of a plant that has been freeze-dried — it's called a resurrection plant; you can actually buy it on Amazon — and this is a time-lapse of growing it in our laboratory: after you add water, it comes back to life. So this is a technology that nature already has, but imagine the transformation if we could freeze-dry organs, or freeze-dry eggs. The problem with the cold chain, which you've heard a lot about, could really be addressed with freeze-drying.


Now, I know this conversation will lead to a lot of questions about misuse, and I'm sure we will have plenty of discussions about that, but let me give you a framing that compels me. First of all, I'm not bothered by GMOs — I'll just say it up front; in fact, my dream is to open an all-GMO restaurant in Cambridge. But let me be clear that with regard to GMO food, you — especially us coastal elites — can make a choice. We can choose to shop at Whole Foods, pay more money, and get our wonderful non-GMO food, or we can choose things like the Impossible Burger. I also think there's a social issue with being anti-GMO, because in the developing world GMO is going to be critical. So I'm often a little critical of my coastal-elite colleagues about GMOs, because I think there is a social responsibility to think about their deployment in the developing world. But this is a choice we, at least, can make.

However, think about when you go to the doctor and you're in pain — say you're in chronic pain, you suffer from Crohn's disease or colitis — and the doctor says, "I can treat your chronic pain." You're not going to ask a lot of questions about where that drug comes from; I'd be willing to bet you don't ask any, unless it doesn't work. So I'm going to offer that a real key to socialization around the engineering of biology lies in the health realm, and some way to integrate the two, through real-world applications, has some opportunities.

All right, let me conclude. In case you were wondering where the money comes from to do this work, there is national investment. As I mentioned, DARPA has been one of the main funders over the last 20 years — I did a DARPA ISAT report in 2002, I think, where, by the way, we laid out a roadmap for synthetic biology. That was 20 years ago; it's been a little slow going, but I'm pleased to say we have reached most of the milestones on that roadmap. NSF supported a synthetic biology research center, which is a big deal when NSF puts that forward, and DOD has recently made at least an initial investment in biomanufacturing — manufacturing is a big deal here, and DOD has realized that. And there are a number of private organizations that fund this work as well.


I want to end with a few other sociological issues. Why now? There's an enormous fascination with this field from students — I get letters from high school students asking, "Can I please come work in your lab?" — which resonates for me because, as I said, I grew up in Silicon Valley and got to work in people's labs back then, so I want to help all of them; I even get eighth graders, it's just amazing. So there's this huge interest, and there's also a huge interest in building the bioeconomy, so there's a huge investment interest. We were listening last night to issues around supply chains and moving things around — and of course, if you're growing it at home, and it really is a carbon-neutral process, that begins to solve a lot of those problems.

We also have a generation, I believe — I'm hopeful — that really does want to solve real-world problems, and I'll point out climate change. And as I said, we know a huge amount about how biology works, but we don't know everything by any means, so we need to keep funding basic research; we need to learn more. There are opportunities to combine disciplines — machine learning is key here.

And I want to end with this — as this field grows, and I think this will resonate with the AI/ML people — a need for a trained workforce. We are desperate. I work in Boston, which I consider the center of the universe in this area — you guys from Silicon Valley can argue with me, but I feel like I'm at the center of the universe — and that said, as the industry grows, remember that every startup with a Series B has 200 jobs open. Layer on top of that pharma — Moderna has 800 job openings — and then, on top of that, us poor academics: what are we going to do? Where are the new ideas going to come from? We are in trouble here, and I don't have a solution, but I'm thrilled at what Hunter is doing; I'm thrilled at any efforts. But we have a national problem. So I will end there — thank you.


Totally fascinating, and a great start to our panel. Next we will hear from Dr. Anita Raja, who is a CUNY professor. She teaches computer science at Hunter College, and she is a member of the doctoral faculty in computer science at CUNY as well. So now we'll hear about the AI side of things.


Thank you, Jeremy, and thank you all for being here. I'm excited to be part of this conference. My talk today introduces the definition of artificial intelligence, what it covers in the current-day world, and the challenges we are facing, so that we are prepared for the future.

Next slide — yes, sorry. I'll start with the definition, because I think there is a little bit of confusion about what artificial intelligence is and what machine learning is, and I'd like to make the distinction that these are not all the same general definition. Let's start with AI. What is AI? AI is the study and design of intelligent agents, where an intelligent agent is capable of taking perceptions from the environment, doing some sort of computation or thinking, and then acting on the environment. So it's the entity capable of doing that kind of processing of information. Machine learning, on the other hand — a simple definition would be pattern recognition: the ability of a system, or a set of algorithms, to improve as you give it more data. An AI system could have several machine learning algorithms within it, because the AI system may have to understand language, do planning, and cooperate with other agents. So, at least in my view, coming from the AI side, I see machine learning as an integral part of AI, but not the complete definition of AI.
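As a minimal sketch of that perceive-compute-act framing (a toy example invented for illustration, not any system discussed in the talk), an agent can be written as a loop that maps what it senses to an action; a machine-learning component would sit inside the "decide" step and improve as more data arrives:

```python
# A toy intelligent agent in the sense defined above: it repeatedly perceives
# the environment, does some computation, and acts back on the environment.
# The thermostat-style world and the thresholds are invented for illustration.

import random

class Environment:
    """A trivially simple world: a room whose temperature drifts."""
    def __init__(self):
        self.temperature = 18.0

    def percept(self):
        # what the agent can observe
        return self.temperature

    def apply(self, action):
        # the agent's action changes the world; the world also drifts randomly
        if action == "heat":
            self.temperature += 1.0
        elif action == "cool":
            self.temperature -= 1.0
        self.temperature += random.uniform(-0.3, 0.3)

class Agent:
    """Perceive -> think -> act. The 'thinking' here is a fixed rule; a
    machine-learning component could replace it and improve with data."""
    def __init__(self, target=21.0):
        self.target = target

    def decide(self, percept):
        if percept < self.target - 0.5:
            return "heat"
        if percept > self.target + 0.5:
            return "cool"
        return "wait"

env, agent = Environment(), Agent()
for step in range(10):
    obs = env.percept()          # perceive
    action = agent.decide(obs)   # compute / think
    env.apply(action)            # act on the environment
    print(f"step {step}: observed {obs:.1f} C, action = {action}")
```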


Clearly, just as Pamela discussed, we are in an interdisciplinary area within AI. Even from the very foundations of AI we cross-cut economics; we have to bring in ideas from philosophy and sociology, and engineering of course; some of the projects I will talk about relate to biology; and clearly, when we talk about policy matters, there's the question of how AI relates to law, and so on. Mathematics, of course, is the foundation of both AI and machine learning.

Very important — I'm glad Paul talked yesterday about history, and I wanted to make sure I included the history of AI so that we can talk about the ups and downs, the hype, and the not-so-great times — the winters of AI, as it were. AI basically started around 1940 to 1950. One of the new ideas was when McCulloch and Pitts tried to model the brain using a Boolean circuit; that was one of the parallel efforts happening in addition to Alan Turing writing "Computing Machinery and Intelligence," where he laid out what AI would entail: modules dealing with language recognition, knowledge representation, and learning. So at least he was able to identify some of the parts that would be integral to AI.

Then, from 1950 to 1970, there was a lot of enthusiasm. This is where the early AI programs appeared — Samuel's checkers player, an early AI system beating a human at checkers — and we would officially say the birth of AI happened at Dartmouth in 1956. There is a famous quote from John McCarthy and Claude Shannon from that Dartmouth workshop: a group of scientists got together and said they thought a significant advance could be made in AI if they worked on it together for a summer.


And they did work on it for a summer — but here we are, many decades later, still trying to understand how to move the needle on AI. At least we got the train started. AI became an industry, but as scientists started understanding the complexity of solving AI, we went through an AI winter. There was DARPA funding at that time, and when people realized that we can't just imitate human intelligence within a few months — and how far we are from that — there was a dip in funding.

In the 1990s, though, people had continued to work, and there was a resurgence. Happily, scientists were able to move from what had been very strong assumptions — about linear dependencies and so on — to bringing in probabilistic approaches and integrating uncertainty into the reasoning of the systems we were building. That's where we started modeling our systems more closely to what the real world had to handle. AI became more scientific, and we would call this period — when agents were introduced and learning systems were built — the AI spring.

And then, from 2012 to the present, big data became the big thing. The fact that we had computational hardware that could match the data being produced, and that we took the idea of neural networks — which had been around between 1950 and 1970 — and could finally make it practical, was really the turning point where AI became relevant to industry, once we found that killer application. I recall just finishing graduate school, and all these discussions at our main AAAI conference were about what the next killer application would be to get AI out there — we have these great ideas; how do we get people to integrate them into their systems? — and it was this notion of big data, neural networks, and deep learning that made that happen. And of course now we are seeing production-level breakthroughs from many different angles, which is exciting.


So AI is flourishing, and as you all know, we need these systems to have social understanding and cooperative intelligence. My own background is in distributed artificial intelligence, which, interestingly, was renamed "multi-agent systems" during the AI winter, because only then could we get funding. That was pre-grad-school for me, but it was a smart move, because we could continue working on these systems, and then when AI came back into favor we were back to "distributed AI" — but we had been researching the same questions all along.

So: how do we get systems that understand the context they are in, but can also cooperate with each other and with humans, to do well at problem solving? Some of the applications where these AI systems need to interact, rapidly and in complex ways, would be ending poverty and hunger, reducing inequalities, promoting clean energy, protecting the planet, offering quality education, and in general leading to better health. One example would be getting robots to help nurses deliver supplies to the various rooms in a hospital.

My own work has been both in health, which I'll talk about today, and in traffic: how do we reduce congestion and take advantage of connected autonomous vehicles and self-driving vehicles while humans are still on the road, and do it in an environmentally friendly way? These are some of the research questions we're working on. My north star in research has been this idea of building cooperative agent societies: how do we get intelligent software agent systems to work with humans? My emphasis has been on the intelligent-agent side; Shiwali will talk about the human and intelligent-agent connection, which is a nice complement to having the two talks today.


But these agents need to operate under bounded rationality — a notion from economics. Herb Simon defined bounded rationality: if a system is going to do some computation but does not take the cost of that computation into account, then the solution it comes up with is not operational. This has been injected throughout my work. I bring in a notion of meta-level reasoning, where the agent — the software system — is able to look at the problem-solving process in an end-to-end, non-myopic fashion, instead of just at the single task it's doing, and trade off resources. We are not only boundedly rational, we are also under resource bounds: you have to complete your problem within a set computational time and processing cost, along with the other resources needed to solve the problem. So whether we look at this from a single-agent perspective or with multiple agents working together cooperatively to solve problems, it's important to think about these resource bounds, and also about uncertainty.
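A hedged sketch of that meta-level trade-off (the candidate deliberation strategies, their qualities, the cost per second of computation, and the deadline are all invented for illustration): the agent picks not the best answer in the abstract, but the best answer net of the cost of computing it within its resource bounds.

```python
# Meta-level reasoning in miniature: choose how much to deliberate by trading
# expected solution quality against the cost of the computation itself
# (Herb Simon's bounded-rationality point). All numbers are made up.

deliberation_options = [
    # (name, expected solution quality, compute time in seconds)
    ("quick heuristic",         0.70,   0.5),
    ("local search",            0.85,   5.0),
    ("exhaustive optimization", 0.99, 120.0),
]

COST_PER_SECOND = 0.004   # utility lost per second spent deliberating
DEADLINE = 30.0           # hard resource bound on computation time

def net_value(quality, seconds):
    return quality - COST_PER_SECOND * seconds

feasible = [(n, q, t) for n, q, t in deliberation_options if t <= DEADLINE]
best = max(feasible, key=lambda opt: net_value(opt[1], opt[2]))

for name, q, t in deliberation_options:
    flag = "" if t <= DEADLINE else "  (violates deadline)"
    print(f"{name:24s} quality={q:.2f} time={t:6.1f}s "
          f"net={net_value(q, t):+.3f}{flag}")
print("chosen:", best[0])
```

Under these made-up numbers the agent settles on the middle option: the exhaustive search would give the best answer but busts the deadline, and the quick heuristic leaves quality on the table.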




One of the projects I've been working on for the past decade is in the area of health: how do we use machine learning algorithms, and eventually build decision-support agents, to predict and prevent disease? Specifically, we've been looking at preterm births — babies born at less than 37 weeks. This was funded by NSF, and more recently by NIH. What we have found is that there are great data sets out there, because NIH runs all these studies, but the question is how to leverage them, because there is a huge cost in the initial stage of working with these data sets, which is data cleaning: how do you take the data, massage it sufficiently, and extract the information you need so that it is relevant for machine learning?

This is a team with Columbia University, Hunter College, and CUMC, the Columbia University Medical Center, so we work very closely with experts in OB-GYN to get their input. I think this is a critical part of any research — you don't want to be working in your own silo — and the amount that we learned from them, and vice versa, has been tremendous. It does take time; it's been over a decade.

We have been making progress in the prediction of preterm birth — our algorithms improved prediction by about 20 percent over the state of the art — and more recently we took those algorithms and participated in a data challenge hosted by the NIH. What they were trying to do was get both industry and academics to use a data set that had resulted from a huge NIH study — which, fortunately, we were already using as part of our project — to come up with new solutions: are there new methodologies that could help identify patients at risk of morbidity, that is, mothers at high risk? The question we then applied our algorithms to was preeclampsia. Preterm birth itself is not a maternal morbidity per se, but that is the generalization our understanding of the data and of the algorithms allowed us to make. We were one of the seven teams that, happily, won this data challenge.

We came up with new ways of analyzing the data to identify the mothers at the highest risk of preeclampsia, but also those at the lowest risk. Why is that important? Because if you're at the highest risk, you want to make sure you get the treatment that's needed; but if you're at the lowest risk, you want to make sure you don't get all the extra tests, visits, and added stress that could eventually contribute to the disease itself.


Happily, our work was recognized, and we are continuing this line of research. I just want to give a brief description of the data set so you get a sense of its scale. The study was run from 2010 to 2015 and collected data on about 10,000 first-time moms — nulliparous women — which is one of the most difficult groups for preterm-birth prediction. That's something we wanted to focus on, because usually the main indicator for preterm birth is whether the mother had a prior preterm birth, which simply increases the assessed risk and the care that would be given. So we were happy to have this data set of first-time mothers, which also included a significant number of spontaneous preterm births — another outcome that is very difficult to predict. This was an important data set for us. The data had been collected over four different visits, with about three thousand features from each visit. A feature would be information such as blood pressure, temperature, ultrasound measurements, or surveys about socioeconomic circumstances — so we had clinical and socioeconomic information, and, most importantly, for the first time in our study we also had genetic information. We were able to combine all of this and push the prediction results forward.

My interest, specifically, is not just "okay, great, you're able to predict who's at high risk" — so what? What can you do about it? So I brought my background in scheduling and planning to look at prevention: how do we take the necessary steps? Specifically, I'm interested in making early predictions, so that you have enough actions or contingency plans available to handle what is happening to the patient.

I won't go into all the detail, but it took us a year and a half just to take this data and clean it up so that it was amenable to the kinds of algorithms we had to work with. Some of the issues were contradicting data and missing information — because this is real-world data — and even handling those issues has resulted in new algorithms for dealing with missing data that we are able to share with the community.
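As a small illustration of the kind of cleaning involved (the column names, thresholds, and rules below are hypothetical, not the actual study variables or the team's algorithms), here is what reconciling contradictory entries and imputing missing values can look like in Python with pandas:

```python
# Toy version of cleaning visit-level study data before machine learning:
# flag implausible values, reconcile contradictions across visits, and impute
# what is missing. Column names and thresholds are hypothetical.

import pandas as pd
import numpy as np

visits = pd.DataFrame({
    "patient_id":   [1, 1, 2, 2, 3, 3],
    "visit":        [1, 2, 1, 2, 1, 2],
    "systolic_bp":  [118, 121, 135, np.nan, 290, 128],   # 290 is implausible
    "bmi":          [24.0, np.nan, 27.5, 27.9, 22.1, 22.4],
    "prior_hypertension": ["no", "yes", "yes", "yes", "no", "no"],  # patient 1 contradicts itself
})

# 1. Treat physiologically implausible measurements as missing.
visits.loc[~visits["systolic_bp"].between(60, 250), "systolic_bp"] = np.nan

# 2. Reconcile a contradiction in a variable that should not change between
#    visits: take the value reported most often for each patient.
consensus = (visits.groupby("patient_id")["prior_hypertension"]
                   .agg(lambda s: s.mode().iloc[0]))
visits["prior_hypertension"] = visits["patient_id"].map(consensus)

# 3. Impute remaining gaps within each patient, then fall back to the median.
for col in ["systolic_bp", "bmi"]:
    visits[col] = visits.groupby("patient_id")[col].transform(
        lambda s: s.fillna(s.mean()))
    visits[col] = visits[col].fillna(visits[col].median())

print(visits)
```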


In terms of results, what I'm showing here is that, for the challenge, we compared those who were going to develop preeclampsia with those who did not have hypertension at all — preeclampsia is related to blood pressure — and our algorithms showed a significant improvement in performance when we used these machine learning methods. The other thing we were able to do — there are a couple of slides and images here — is identify features. When a clinician looks at a patient, they might use five or six features to figure out who's at risk, but machine learning can give you more detail about the important features and also rank-order them: this is what you need to look at. Finally, we were also able to look at thresholds that give early indicators. One result I'm showing here is that if the BMI was about 26 for the patient at a particular visit — often you'd say, "oh, it's 26 and they're pregnant, that's okay" — what we saw from the data was that this could be an indicator we should pay attention to.

In our work it was specifically important to look at subgroups, and I think this is an important issue to consider when presenting machine learning results: don't overgeneralize what you're looking at. We had patients from many diverse backgrounds — in terms of race, age, and socioeconomic situation — and right now our analysis is about how to break the data down so that we give the right prediction, and the right options, for the right set of patients. This is exciting, ongoing research. Again, that's part of what I'm saying: we want to build a variety of models so that, within the same cohort, the different subtypes of preeclampsia can emerge and be captured.

Another highlight of this work: once we did produce a good model, we wanted to make sure it was fair to the group we are studying, and what we noticed was an inequality. I'm not going into the more technical details, but what I want to emphasize is that we found a high false-positive rate for the African-American population in our group. That's because there is a smaller representation of that group in the training set; when it came to prediction, we were flagging that group at a higher rate than what was actually happening among the patients, which means a patient could be falsely diagnosed with preeclampsia and the corresponding actions could be taken. So we have to be very careful when we build these models to make sure they are fair, and there are existing methods — as well as methods we have developed — to ensure that we don't just take the model and implement it immediately, but first check that it is fair and balanced. We were able to fix the issue by determining appropriate cutoffs. The other thing we found was that the Asian-American population was being underdiagnosed — so while Black patients were being overdiagnosed, another group was being underdiagnosed, again simply because they were not fairly represented in the data set; it's not an equal representation. These are, I think, long-term lessons for all of us as we work on bringing machine learning, and eventually decision-support systems and AI, into this kind of work.
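A minimal sketch of that kind of subgroup audit (synthetic data and placeholder group names, not the study's actual model or method): compute the false-positive rate separately for each group, and, if one group is being over-flagged, choose group-specific decision cutoffs that bring the rates back in line.

```python
# Subgroup fairness check in miniature: measure false-positive rates per
# group and pick per-group probability cutoffs that roughly equalize them.
# The data are synthetic and the groups are placeholders.

import numpy as np

rng = np.random.default_rng(0)

def simulate(n, score_shift):
    """Fake model scores and true labels for one subgroup."""
    y = rng.binomial(1, 0.15, n)                 # ~15% true positives
    score = np.clip(0.3 * y + score_shift + rng.normal(0.35, 0.2, n), 0, 1)
    return y, score

groups = {
    "group_A": simulate(2000, 0.00),
    "group_B": simulate(300, 0.10),   # smaller group, systematically higher scores
}

def false_positive_rate(y, score, threshold):
    pred = score >= threshold
    return pred[y == 0].mean()

print("single shared cutoff of 0.5:")
for name, (y, score) in groups.items():
    print(f"  {name}: FPR = {false_positive_rate(y, score, 0.5):.2f}")

# Choose each group's cutoff so its FPR lands near a common target.
TARGET_FPR = 0.10
print(f"\nper-group cutoffs targeting FPR ~ {TARGET_FPR}:")
for name, (y, score) in groups.items():
    neg_scores = np.sort(score[y == 0])
    cutoff = neg_scores[int((1 - TARGET_FPR) * len(neg_scores))]
    print(f"  {name}: cutoff = {cutoff:.2f}, "
          f"FPR = {false_positive_rate(y, score, cutoff):.2f}")
```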


A second project I work on is the idea of connected autonomous vehicles. Again, I come from a distributed AI background, and the key idea here is the notion of selfish routing. What is selfish routing? It's the desire of each driver, or each vehicle on the road, to optimize its own path — and because everyone is doing that, the result is actually bad for society as a whole. We can map this onto a lot of different applications: any time you are not coordinating with other agents and there's a limited resource. There are a lot of interesting paradoxes around what happens with selfish routing, where everyone — for us it's I-95, or Route 1 — goes to the highway instead of taking the alternate routes, and hence the highway is super congested.
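The cost of selfish routing can be made concrete with the standard textbook two-road example, often attributed to Pigou (a generic illustration, not a result from the speaker's project): one road always takes one unit of time, while the other takes x units when a fraction x of the traffic uses it.

```python
# Pigou's two-route example of selfish routing. Route 1 always costs 1.0;
# route 2 costs x, where x is the fraction of traffic using it. Selfishly,
# everyone takes route 2 (its cost never exceeds 1), so the average cost is
# 1.0; a coordinator would split the traffic to minimize the average.

def average_cost(x_on_route_2):
    cost_route_1 = 1.0
    cost_route_2 = x_on_route_2
    return (1 - x_on_route_2) * cost_route_1 + x_on_route_2 * cost_route_2

selfish = average_cost(1.0)   # everyone piles onto the variable road
best = min((average_cost(x / 100), x / 100) for x in range(101))

print(f"selfish equilibrium: average travel cost = {selfish:.2f}")
print(f"coordinated optimum: send {best[1]:.2f} of traffic to route 2, "
      f"average cost = {best[0]:.2f}")
print(f"price of anarchy here = {selfish / best[0]:.2f}")
```

Selfish drivers all take the variable road (average cost 1.0), while the coordinated split of 50/50 brings the average cost down to 0.75 — a concrete instance of the inefficiency just described.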


Some of the things we are trying to bring in are this idea of connectedness in vehicles — especially as we move toward self-driving or partially self-driven cars — and the idea of platooning. That's not so great for New York City, but think about the big highways of Texas, where we can get groups of vehicles to coordinate and travel very close together: there is a lead vehicle, and it determines the path for the other vehicles that are part of the platoon, which stay very closely connected, and we can talk about long-term routing plans. That's work we have been doing.

There are a lot of numbers here, but what we do in simulation is look at different car-following algorithms, and we were able to show that by cooperating, the average speed of the agents actually increases and the average travel time drops, while we can also improve fuel usage and reduce emissions. This was done in a complex simulator called SUMO, and we bring in multiple agents — each car not knowing what the other cars are going to do, but communicating with the others so that they can come up with a solution that is desirable for the entire group.
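As a rough, self-contained sketch of the platooning idea (a toy kinematic model with invented gains, gaps, and time steps — not the SUMO experiments described above), followers simply track the vehicle ahead at a short fixed gap while the lead vehicle executes the platoon's plan:

```python
# Toy platoon: a lead vehicle chooses its speed profile, and each follower
# adjusts its speed to hold a short headway to the vehicle in front.
# Purely illustrative; real platooning work (e.g. in the SUMO simulator)
# models vehicle dynamics, communication delays, and safety constraints.

DT = 0.5           # seconds per simulation step
DESIRED_GAP = 8.0  # metres between platoon members
GAIN = 0.4         # how aggressively a follower corrects its gap error

def lead_speed(t):
    """Lead vehicle's plan: cruise, slow for an obstacle, speed up again."""
    if 10 <= t < 20:
        return 15.0
    return 25.0

positions = [0.0, -10.0, -20.0, -30.0]   # lead first, followers behind (m)
speeds = [25.0, 25.0, 25.0, 25.0]        # m/s

for step in range(80):
    t = step * DT
    speeds[0] = lead_speed(t)
    for i in range(1, len(positions)):
        gap = positions[i - 1] - positions[i]
        # follower matches its predecessor's speed, corrected by gap error
        speeds[i] = max(0.0, speeds[i - 1] + GAIN * (gap - DESIRED_GAP))
    for i in range(len(positions)):
        positions[i] += speeds[i] * DT
    if step % 20 == 0:
        gaps = [positions[i - 1] - positions[i] for i in range(1, len(positions))]
        print(f"t={t:5.1f}s  speeds={[round(v, 1) for v in speeds]}  "
              f"gaps={[round(g, 1) for g in gaps]}")
```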


Having discussed some of the projects, what are the issues? I've already mentioned bias and making sure there's fairness. We also want to address the lack of transparency as we build these decision-support systems, and the accountability of the algorithms — especially when it comes to health or major financial decisions, where you're making recommendations about whether someone should get a credit loan or not. You want to make sure your decisions are transparent and that you're being accountable. I'll just skip through these — they are some examples of where we want to bring in policy, and where we have a social network of systems that need to work together successfully.

So here are some ideas I'm currently looking at to address these issues — let me just pause this and go back to the previous slide. As I mentioned already, whenever we build our models we check for fairness, but I'm sure there are broader ways to consider this in terms of societal impact and policy.

Here's an example — I'll just let you watch this simulation and then describe it. The goal of this boat is to complete the circuit described up here; it's a race, and the maximum utility the boat can get is by going through the circuit. The other option is that it can also get some utility by encountering these turbo pellets. What this boat has learned to do is skip the race entirely and just go around picking up all the pellets that are out there — and it was able to maximize its utility that way.


So what is the concern here? Clearly there is a concern. What is value alignment? This is a simple example where there is some X you would like your agent to pursue — the goal for your agent is X — and instead it has decided to go find an X-prime and maximize its utility for that. When it comes down to how we define the goal for an agent, we say: here's the problem, here's how you can get utility, and, narrowly, you have to maximize expected utility to win the game or to do well as a system. That is not sufficient as we build more complex systems; we need to make sure there is an alignment between what counts as valuable for the agent and what the human, or the designer, would like the system to do.

There are multiple approaches that have been suggested. One idea that has come out of Stuart Russell's lab is that when we do value alignment, perhaps you don't have to define the details of the human preferences; you provide the preferences to the agent only at a high level — there's this whole area of inverse reinforcement learning — and the agent then figures out what exactly the human wants to do. This is important when we direct an agent to do things that are mission-critical, especially in health or defense, because otherwise it could just pursue its stated goal at the cost of ethical decision making, or the cost of the environment, or the cost of the comfort of others. So I think giving a little more freedom is one way to move forward with value alignment — so the agent has the freedom to figure out what would satisfy the human's preferences, rather than those being too specifically defined.
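A toy version of the boat example (an invented reward scheme, not the actual demo shown on the slide) makes the misalignment concrete: the designer's intent is "finish the course," but the reward that was actually specified also pays for turbo pellets, so a reward-maximizing agent prefers to farm pellets forever.

```python
# Reward misspecification in miniature: the designer's intent is "finish the
# course", but the written-down reward also pays +1 per turbo pellet.
# Comparing two canned strategies under that specified reward shows the
# agent preferring to farm pellets. All numbers are invented for illustration.

FINISH_REWARD = 10
PELLET_REWARD = 1
HORIZON = 60          # steps available in an episode

def specified_reward(strategy):
    if strategy == "race_to_finish":
        # finishes after 12 steps and picks up 2 pellets along the way
        return 2 * PELLET_REWARD + FINISH_REWARD
    if strategy == "farm_pellets":
        # circles a pellet-rich region; pellets respawn every 3 steps
        return (HORIZON // 3) * PELLET_REWARD
    raise ValueError(strategy)

def intended_value(strategy):
    # what the human designer actually wanted: the course completed
    return 1 if strategy == "race_to_finish" else 0

for s in ("race_to_finish", "farm_pellets"):
    print(f"{s:16s} specified reward = {specified_reward(s):3d}   "
          f"intended value = {intended_value(s)}")

# The reward-maximizing choice (farm_pellets, 20 points) is exactly the one
# with zero intended value -- which is why the talk argues for aligning the
# agent's objective with human preferences rather than a narrow utility spec.
```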


There's also a lot of work, as you know, on the issue of ethics, especially in the past five years, as we have seen AI being used in industry; there's been a lot of discussion in our community. Happily, it is being integrated into our education — having students be aware not only of the technical details but of the impact of their work — and this has to start from the ground up, not after they get the job and build algorithms that might have ethical or societal problems. So these discussions within the research community have been important, across the board.

One of the other areas I'm interested in is this notion of cooperative AI and differential progress. There was a recent paper in Nature that discusses this, and I think I've included the link. What are some of the important characteristics of cooperative AI? We want systems that have the ability to understand the consequences of their actions, both on other agents and on human society at large, as well as the consequences for the environment. We also want agents to be communicating with each other, cooperating, and making commitments. I see these almost as research facets: how do we do this better, and how do we build algorithms that are transparent, unbiased, and accountable, so as to pursue each of these research goals?


We also want agents to be able to identify norms, and to follow the social norms that are out there. When an agent is put into different societies, or different groups, how does it identify what the norms are? Some of my group's past work has been on identifying the type of network one is in and achieving fast convergence — how do we come up with algorithms for that? A social network is one example; we could also have random networks or other types — a social network is a scale-free network. How do we achieve convergence when networks have different characteristics? First identify the type of network, then understand what the norms for that society would be, and then achieve that norm.

Finally, integrating this notion of network thinking has been an important part of my work for at least the past decade: the agents don't just think about themselves — which is not too different from what multi-agent systems are — but also about the society at large. Most of my work has been about other agents, but we are slowly also integrating the preferences of humans, while others are working from the other end — looking at agents and humans first and then moving forward with the solution process.

Bostrom — and this connects to the Cooperative AI Foundation — has written a lot on these ideas of differential progress. This is the notion that you want to accelerate — and I think this crosscuts any of the technologies we're talking about today — the implementation of beneficial technologies, especially those that would reduce the hazards posed by other technologies. We can't keep working in a silo, in our own narrow box; as other technologies advance, how do we talk to each other? I think conferences like this one are very helpful for understanding the impact of our work.


In conclusion: happily, AI has made tremendous progress in the past decade or more. Initially, when I was in graduate school, machine learning was mostly applied to games, and then it was the game of Go that put us on the radar of industry, when the system AlphaGo beat the best Go player in the world, Lee Sedol — and in a very different fashion from the chess match played against Kasparov in 1997. The difference is that when Kasparov was beaten, it was more a matter of complete search, whereas AlphaGo was able to self-learn and come up with strategies no human had even thought about — moves that looked really bad to the experts and eventually beat the experts. So we are definitely making progress, and games are just an abstraction of what we can do in real-world situations: you can take that and apply it to traffic, or to farming and agriculture — how do you optimize your resources and get the outcomes you want?

But it is critical to consider the downsides. I'll put a more positive spin on one example: radiology often gets beaten up at AI conferences, because vision research has gotten so good that the discussion is often "radiologists are going to be replaced by AI." But here's a quote I completely agree with — I think humans are important in the loop, and we are not replacing the human in the loop: AI won't replace radiologists, but radiologists who use AI will probably replace the radiologists who don't. And not just in radiology — in general, I think AI is an assistive technology that we should leverage so that we as humans can focus on the more complex problems, while AI does the things that might be easier in some sense but are computationally intense. Hopefully, humans will be freed to do things that are more interesting and more complex as a result of AI. That is the goal, and that is what I hope to train my students to do, rather than replacing jobs — which is always a concern one has to be aware of.


There is also the role of government. It's important that government recognize the importance of AI and give it the freedom to do the work where it's needed, but also set limitations when things are not going right. The government has to keep people informed about what is going on in the research: when you fund the researchers, the researchers provide the reports and that information — and I think this is already happening; our reports are becoming much more detailed about how the work is going to affect society. But I also think researchers should identify the potential bad effects of their work. Let's give that some thought even when we are publishing papers: not just the limitations, but what could go awry, and what do you need to do about it? We can't cover the whole space, but at least we start thinking about these issues.

Finally, I think the AI community has a role in sharing data. This is always a challenging issue, even more so within health, but that's where we need data: 10,000 patients with 3,000 features per visit sounds like a lot, but it's not enough for the kind of work we are doing — we actually need something like five times more, and the right type of data, to support our learning process. The community also has to avoid hype. Unfortunately, our field does go through cycles of too much hype, and some of it comes from the outside, so researchers need to be very specific about what their results are and what the challenges are, and they need to connect with the users. And finally, I think the goal of our research should be to empower people, not to devalue humans. If that is clear, and that is why we are moving forward, I think there is continued great excitement ahead for the AI community. Thank you for your time.


[Applause]


okay


thank you


for that wonderful talk um so next up


is dr shiwali mohan


who is a senior member of the research


staff and principal investigator at the


xerox parc


organization in california and she works


also on ai


with a special attention to human


machine collaboration


good morning everyone i'm very excited


to be here i'm shiwali


i'm a member of research staff at xerox


parc some old-timers may know it used to


be a famous research lab we still


believe we are doing good things but


here i am


i titled my talk humans of ai and


well it's a tip of a hat to the famous


collection of photographs called humans


of new york and we are in new york so i


was excited to title my talk this but


also this is a critical


introspective talk as an ai scientist a


lot of the times as ai scientists we


tend to get super excited about


algorithms and systems and efficiency


and models


but we forget that

these systems are being


designed for humans and those humans


would like to do something interesting


with these systems right so i want to


sort of turn our discussion over its


head and think about humans of ai first


humans who are using ai systems to do


something productive and that's what


i'll be talking about today all right


like with any good ai talk i'll start


with some inspiration from hollywood so


we've been fascinated over past you know


century fascinated with ai systems right


we envisioned these societies where


little robots and humans are living


together doing good things and what's


exciting about these things i think is


that these


entities are intelligent collaborators


they're independent long-living entities


they're goal driven they solve problems


they interact and communicate with


humans empathize with them and they


learn from the experience right that's


what makes them these entities very


exciting but if we look at ai research


now it doesn't look like what we thought


ai would look like right if you are


clued into the hype cycle we recently

got to know about dall-e 2 which is

a multi-billion-parameter model


that can generate images that look like


art right so


the focus has been on larger models


larger data sets and


beating the state of the art and that

sort of is where ai research ends

and this doesn't look like what


we you know hollywood taught as ai


should be


right so the question that my research


and other researchers


and you know anita is part of that


community they ask is will algorithmic


research by itself lead to intelligent


collaborators that we were excited about


and the answer tends to be no you


actually have to put all of that


algorithmic research machine learning


research into ai systems and there are


communities and people who are exploring


that right so anita showed you what an


ai system would look like


the view right so ai system living in


the world it perceives the world it


represents the current state it knows


certain actions that it can do it thinks


about what's the right action given its


goals and it acts right so and on and on


it goes it's trying to achieve certain


goals in its environment
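
A minimal sketch of that perceive-represent-decide-act loop; the toy environment, goal, and action set here are invented placeholders, not any particular deployed system.

```python
# A minimal sense-think-act loop, invented for illustration only.
def perceive(env):
    return dict(env)                       # represent the current state

def choose_action(state, goal, actions):
    # "think": pick the action whose predicted outcome is closest to the goal
    return min(actions, key=lambda a: abs((state["x"] + a) - goal))

def act(env, action):
    env["x"] += action                     # the action changes the world

def run_agent(goal=10, steps=20):
    env = {"x": 0}
    actions = [-1, 0, 1, 2]
    for t in range(steps):
        state = perceive(env)
        if state["x"] == goal:             # goal achieved
            print(f"reached goal at step {t}")
            return
        act(env, choose_action(state, goal, actions))
    print("ran out of steps; note there is no model of a human anywhere in this loop")

if __name__ == "__main__":
    run_agent()
```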


however


if you deploy this system there is a


critical part of this equation that's


missing there's almost always going to


be a human involved in this loop


somewhere but the


science doesn't know where to put the


human in right so we don't know how to


put the human into this loop the ai


system is operating in and this


has been like recently people have been


starting to bring bring this question up


right so we have articles that are


coming out that say


that you know we in ai science we don't


know how to represent humans how to


model them how to interact with them


and this is where i think social


sciences and ai science computational


sciences can interact because social


sciences have studied human behavior


human learning


human


human lives and we can bring those


models into ai to


you know develop better collaborative ai


systems right so that's where a lot of


this talk and my research


will go in the direction of


how do we build intelligent


collaborators that are designed to


support the goals of a human partner so


we are starting from this goal that we


want to


support a human doing a task we want to


model the human partner explicitly we


want to understand how they will react


uh to changes how will they learn how


will they behave and then we are really


focused on figuring out ai systems that


will have effective performance on human


tasks so we are trying to move away from


computation-centric metrics of


efficiency largeness of the model


accuracy to more


uh metrics that focus on effectiveness on

human tasks right so we want to look at


safety and health applications we want


to look at


acceptability and transportation and


i'll talk about those a little bit but


what this does is this opens up the


space of applications that we can deploy


ai in and


i'll talk about a few of these problems


so i wish i had a great you know unified


theory to give you like this is how you


would include humans in ai systems or


models of humans and ai systems but i


don't so what i'm going to do is i'm


going to do this case study style so


i'll present three different projects


that i have worked on in the past and


we'll talk in each of those projects


they're from different domains we'll


talk about what does it mean to model


humans and bring theories of human


behavior into ai reasoning and then i'll


conclude with some you know closing


thoughts all right so the first


topic is interactive task learning and


the context of this problem is so right


so if you are from silicon


valley like i am you know robots are


going to be here and they will live with


us in our world right there's already


robots running around in mountain view


delivering pizza to graduate students so


that already exists and now amazon is


releasing more robots that will follow


us around in our homes right the robots


are here


the problem is that it's really hard


to program a robot and everyone's home


is different and everyone really has you


know set patterns that they want


you know their people who live

with them to follow


now one way of deploying this is that


every time a robot is put into someone's


home there is like a team of engineers


that comes with that robot and then they


program that robot so that now it can do


the things like it could make a chapati


like your mom does right but

that's just unscalable

this solution will just not


work right so the other way to think


about this question is that can we


design robots that can be


that can be trained by the end users of


those robots right so humans


are natural teachers we teach each other


all the time whenever someone comes to


our home we teach them like oh this is


how we have organized our kitchen can


you please help us right so we want to


leverage this natural capacity that


humans have to teach robots and then


have you know robots program


themselves through these natural


interactions so that's


where the problem of interactive task


learning comes from in 2017 there was


this meeting


a large interdisciplinary meeting with


people from machine learning cognitive


architectures robotics psychology


computational linguistics all these


people got together and talked about


this problem of interactive task


learning how do we design agents


that


you know learn from and teach humans


while they live with them and so it's a


great book that came out of that effort


and i would encourage you to read it if


this is something interesting


right so


interactive task learning is a very


different paradigm


you know if you think about machine


learning and how machine learning


systems are taught these are you give


them an input you know there's a data


set which has the input signal and then


the supervised signal that goes with it


right that's how those systems are


trained but that's not how humans think


about teaching or training and here's an


example of what


put it on your hands too


oh good drawing try again


on your head on your head


put it on your head


what


oh good try again


good try but you're missing your head


you got to put it here on your head


you want to try again


you need help help okay


okay so


this is what like natural interactive


task learning is right so that was my


friend's son ishaan and he's talking to


his nanny satsi

and she's basically

trying to teach ishaan how to put on

a hat how to wear a hat and you


will note i mean and this is not


surprising but there is no like training


phase testing phase training phase


testing phase it's all incremental


there's experience that ishaan is now


analyzing and then extracting useful


information from so that he would be


able to do the task right it's all


online


you know classical machine learning


setup would like okay let's take the


architecture offline retrain it bring it


back online and now it performs


certainly that's not how humans learn or


behave
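
A minimal sketch of that incremental, online flavor of learning: the learner updates after every single experience and asks the teacher for help when it is unsure. The concept being taught here (is a number "big"?) is a stand-in invented for illustration, not the hat task or any real interactive-task-learning system.

```python
# Incremental, online learning with a teacher in the loop (toy illustration).
class OnlineLearner:
    def __init__(self, threshold_guess=0.0, confidence_margin=2.0):
        self.threshold = threshold_guess     # current guess of the boundary
        self.margin = confidence_margin

    def predict(self, x):
        if abs(x - self.threshold) < self.margin:
            return None                      # unsure: ask the teacher for help
        return x > self.threshold

    def learn(self, x, label):
        # nudge the boundary toward the misclassified / uncertain example
        target = x - self.margin if label else x + self.margin
        self.threshold += 0.5 * (target - self.threshold)

def teacher_label(x, true_threshold=5.0):
    return x > true_threshold                # the benevolent teacher's concept

if __name__ == "__main__":
    learner = OnlineLearner()
    for x in [1, 9, 4, 6, 5.5, 4.5, 8, 2]:   # experiences arrive one at a time
        guess = learner.predict(x)
        if guess is None or guess != teacher_label(x):
            print(f"x={x}: asking for help / correcting")
            learner.learn(x, teacher_label(x))
        else:
            print(f"x={x}: predicted {guess} correctly")
    print(f"learned threshold is about {learner.threshold:.2f}")
```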


it's and i think one of the most


interesting things is


this notion that ishaan knew when he had


failed at the task right he failed to


orient the cap properly he knew it and


he asked for help and that's very


critical because learners know where


they're failing and why they're failing


and human teachers can really flexibly


adapt to that situation once you realize


that you're you know the person that


you're working with does not know this


component of the task you will really


hyper focus on that right so that that


nature of human learning the active


nature the interactive nature is


critical and that's why we learn so


efficiently right we learn from small


few experiences but these experiences


are highly salient for that task and


then one but the final critical part


about human learning is is this


benevolent teacher right the nanny who


wanted to teach ishaan the task so we want to


incorporate all of these notions that


are part of human training into


designing robots right so of the

kinds of approaches i've worked with


i'm most excited about an approach


that's built off of cognitive


architectures so you may have

heard of deep learning architectures

these are not those these are a

different class of architectures um


you could think of them as blueprints


for generally intelligent behavior right


so it's an intersection of cognitive


science and artificial intelligence the


research has looked at what is the


computational basis for generally


intelligent behavior in humans because


they are the only known examples


of generally intelligent behavior


right and then can we extract something


useful from them and then implement


algorithms and software that operate


like humans right so that's where the


philosophy of the design is


is


basically comes from and they've been in


development for 40 years so there has

been progress there are built systems and

architectures that you know do that the


prominent one that i work with is called


soar


and so it's currently hosted at


university of michigan that's where i


went for grad school and we've been


working on this interactive task


learning problem since 2012 right so


this is year 10 of trying to build


systems that can be trained by humans


through natural uh interaction and the


most recent advancement was that we won


uh the work at michigan won

the best demonstration award at

aaai so this was an agent called rosie


that could be trained to do certain


kitchen tasks by a human through natural


interactions i'm going to talk a little


bit about two recent advances that we've


made that we've done at parc so the


first one was before we design an agent


that can learn from natural human


teaching


it's very useful to understand how do


humans teach and why does that matter


right so


machine learning and a lot of ai people


would have us believe that humans are


only good for giving us labeled data but


that's not true humans don't think like


that humans don't teach like that right


so we wanted to unpack what human


teaching is and so here's preethi who's


a graduate student at university of


michigan she basically constructed the


study in which she had people come in


and play around in this blocks world on


the top


right corner and use these blocks to


build a wall right and then we analyze


this data and try to

extract some general principles of

human teaching

so there's some findings right so the


first one is that people naturally break


down complex tasks into simpler


components right so in in the task of


building a wall people naturally

broke it down you know okay these are red


objects these are green objects you have


to place them next to each other right


so they broke down a very complex task


which was sequencing putting these


objects in a sequence into simpler


components that now the robot can


potentially reason with in a very


limited you know hypothesis space


people use and express the variety of


teaching intentions so it's not just


people are not just giving data to the


robot but they're also trying to


evaluate the boundaries of competence of


the robot so they will ask okay now can


you place the red object next to the


blue object just to evaluate if the


robot actually knows


you know the concept next to people are


flexible and they react to failures in


the robot right so that's a good thing


that means that as


we expose the failures of robot to the


human teachers they would react to it


and would give the robot the right


information and then finally people


organize these concepts into very


distinct what we call curricula


and these curricula were flexible were


adaptive and then they were influenced


by how people you know people's


background so it's a very cool paper um


i encourage you to read it so as now we


know how humans teach we are now also uh


making progress on building


architectures that can learn from that


kind of teaching so the architecture is


called aileen that can demonstrate


human-like learning


and it basically learns in a way that


humans do right so it's an interactive


learning loop where there's a teacher


who's interacting with the agent


using you know a uh situated environment


where you know the teacher is again


trying to teach the robot how to play


with the blocks world and this uh the ai


learning agent as anita said you


know it has several machine learning


logical inference


uh planning


uh components all put together into a


system so these are not independent


algorithms that usually uh when ai


people talk about they talk about those


algorithms but they are put together in


a system that has interesting behavior


right and what we are seeing is that now


we have architectures that do


multi-level reasoning so at the


bottom-most level we have


inference methods that are reasoning


with the metric space that a robot has


to deal with right the real world the


real perceptions the robotic control and


all of that but then a level up


there is a there's a level of reasoning


that reasons about tasks and goals and


this is like becoming more human-like


where it's thinking about okay what is


the final goal i want to achieve how do


i get there and at the top most level it


is reasoning about concepts in general


right the knowledge that it has general


knowledge it has about the world so we


are we are now beginning to sort of lay


the foundations of an architecture that


can reason at multiple levels just like


humans do and we found that you know


this architecture learns in a way that's


very similar to how humans learn


right so it can learn various types of


concepts it can ground natural language


onto those concepts it learns very


quickly generalizes rapidly because the


trainer can give it salient examples and


then like you know like humans it's very


opportunistic in its learning it can


figure out where it fails ask for an


answer and then learn from that and so


what i wanted to highlight is that as i


come to the close of this particular


case study is that design of ai systems


is not just algorithms and data right


it's also cognitive science because

we are designing these


systems to work with humans so we need


to understand what those humans are


doing


and with the structural guidance from


cognitive science and psychology we are


able to define the desiderata for ai


system design and then implement those


algorithms right so it's it's an


interdisciplinary pursuit
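
A heavily simplified sketch of the layering idea in this case study: a metric level over raw coordinates, a task level that turns a goal into a concrete target, and a conceptual level of general category knowledge. The real architecture, built on Soar, is far richer; the objects, coordinates, and concepts below are invented for illustration only.

```python
# Three levels of reasoning, sketched as plain functions (toy illustration).
def matches_concept(obj, concept):
    # conceptual level: general knowledge about categories like "red thing"
    return all(obj.get(k) == v for k, v in concept.items())

def left_of(obj_a, obj_b):
    # metric level: reason directly over the continuous coordinates a robot sees
    return obj_a["x"] < obj_b["x"]

def plan_place_left_of(mover, anchor):
    # task/goal level: turn a symbolic goal into a concrete metric target
    return {"action": "place", "object": mover["name"],
            "target_x": anchor["x"] - 1.0, "target_y": anchor["y"]}

if __name__ == "__main__":
    red_block  = {"name": "red block",  "color": "red",  "x": 3.0, "y": 0.0}
    blue_block = {"name": "blue block", "color": "blue", "x": 1.0, "y": 0.0}

    # goal stated with learned concepts: "put the red block left of the blue block"
    assert matches_concept(red_block, {"color": "red"})
    if not left_of(red_block, blue_block):
        step = plan_place_left_of(red_block, blue_block)
        red_block["x"], red_block["y"] = step["target_x"], step["target_y"]
        print("executed:", step)
    print("goal satisfied:", left_of(red_block, blue_block))
```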


all right so several others of my


colleagues are doing wonderful


fascinating work on human robot


interaction and collaboration all of


which is focused on trying to build


robots into human spaces and make them


safe to live with humans right so

it's a great field there's a

yearly conference

called human robot interaction so if


there's any interest you could go look


up those papers


all right so switching gears a little


bit i'll talk about sustainable


transportation


um so


a lot of you live in new york i come


from california so this scene is very


common where all the highways are choked


up and anita already talked about

you know how ai can help solve that

problem right congestion wastes


several billion hours of time and


several billion gallons of fuel per year


so arpa-e transnet back in 2013 came up


with this program where they said hey


can we use technology to


help solve this problem right and we can


like we can come up with energy


efficient routes for everybody who's in


the transportation network but there's a


critical part missing right so you can


come up with the most energy efficient


plan and you can tell it to the human if


the human doesn't accept that as the


recommendation to follow you have done


nothing right so there's a critical


human component to this problem where


you have to influence the human to do


what the system thinks is the best thing


to do so that's what we studied as part


of um the paper here in jair which is what


does it mean to influence a human to


follow a route that a you know an energy


efficient planning system is


recommending so we define this problem


we call the influence problem we imagine


there's dr jane who goes to her office


every day in the morning at 9:00

a.m


and she has an

assistant installed on her phone

that has access to her calendar that


has access to the transportation network


that dr jane is embedded in and it knows


that you know she can drive she can walk


and she can take this specific bus and


it also knows some personal aspects of


dr jane right so it knows that she's


employed in a regular job she cares


about her environment values her free


time while she travels right so using


all of this information


copter which is the agent that we built


can

in a timely fashion at 8:45 a.m come up


with an acceptable option that hey if


you walk to this bus stop near your


home there's a direct bus to your office


and that's what you should take and it


makes this message compelling by saying


that by accepting this route you would


reduce emissions by 10 percent you

would contribute to a

reduction in emissions right so


there's different aspects for


influencing humans


and the paper lays out the mathematical


framework of how we can do that so that


you know ai computations can process


that model


and then


personalize the recommendations to dr


jane so that those recommendations are


actually adopted by dr jane right so


some of us may be familiar with


this theory uh from behavioral economics


choice theory which projects the choices


or the route recommendations that you


may have for dr jane into her personal


utility space right and when i'm saying


hey you shouldn't drive but you should


take the bus i'm asking her to pay a


cost in that in terms of that utility


and the probability that dr jane is


going to to accept that recommendation


or adopt


that recommendation is going to be


inversely proportional to the cost so if


the cost of transitioning to bus is


lower than transitioning to you know


biking to her office then she's more


likely to go to the bus right so that's


the main idea and once we define the uh


influence problem that way and define


acceptability that way we can bring


those notions those notions of human


modeling into ai planning and we can say


that instead of planning


for you know the most energy optimal


route we are trying to figure out routes


that lead us to maximum expected energy


savings right so instead of going from

the most optimal energy

optimization we go to


expected energy optimization
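
A minimal sketch of that acceptability-weighted planning idea, assuming (as described above) that the probability of adoption falls as the personal switching cost rises. The modes, costs, and savings are invented numbers, not the paper's model.

```python
# Expected-energy-savings planning: weight each option's savings by the
# probability it is actually adopted. All numbers are made up for illustration.
import math

def adoption_probability(switching_cost, beta=1.0):
    # higher cost in dr jane's utility space -> lower chance she adopts it
    return math.exp(-beta * switching_cost)

def best_recommendation(options):
    # pick the option with the highest *expected* energy savings
    return max(options, key=lambda o: adoption_probability(o["cost"]) * o["savings"])

if __name__ == "__main__":
    options = [
        {"mode": "drive (status quo)", "cost": 0.0, "savings": 0.0},
        {"mode": "bus",                "cost": 0.6, "savings": 3.0},
        {"mode": "bike",               "cost": 2.5, "savings": 4.0},
    ]
    for o in options:
        ev = adoption_probability(o["cost"]) * o["savings"]
        print(f"{o['mode']:>20}: expected savings = {ev:.2f}")
    # note: the most energy-optimal mode (bike) is not the recommendation;
    # the expected-savings-optimal one (bus) is
    print("recommend:", best_recommendation(options)["mode"])
```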


so a few interesting findings first was


that we were able to build a


machine learning model that was able to


estimate acceptability of a mode


using a large data set and this model


can um you know predict with diverse

feature sets and can capture complex

non-linear relationships


we found that acceptability does


influence adoption so we validated it


through a choice experiment with


participants from los angeles and we


found that if we set up the system the


way we did


people would actually stop you know


reduce their driving and move to more


sustainable modes of transport and then


we built an agent-based


simulation of los angeles transportation


network and simulated agents or humans


in that model to see if humans were


behaving like our model said they would


would it even lead to energy savings and


we found positive signal there so that


would lead to some significant reduction


in fuel and delay right so that that's


positive news we weren't able to really


deploy this model but i think we have


set up the right sort of evidence to


show how


you know models from behavioral


economics can be combined with ai


planning and together they can solve


this complex problem


all right so very quickly i'm going to


just go through the final


case study that we had


this was


an nsf nih funded


project


and the premise was that


behaviors rooted in sedentary lifestyles


impose major healthcare costs


right so these would be people not


exercising enough not eating well enough


and that leads to severe

health problems that you know

put more cost into the healthcare


system


so the project was focused on like how


can we design technology that can


support people


in disrupting non-healthy behaviors and


building healthier behaviors


and one of the most impactful ways of


doing this is through human to human


coaching where a coach sits with a


trainee and helps them you know figure


out what's working in their lives what


isn't and what's the right way they can


change some of their behavior so that


they get healthier


now with the advance of mobile

technology and artificial intelligence

this lays open the question of whether

an ai coach that lives in your phone can do


similar things


right and to design this ai coach


we really need to know what underlies


human behavior and how that behavior


changes right so it's a very cool


problem for ai and ci and cognitive


science and that's why i'm super excited


about it so at parc we built two systems


that are designed


um to


solve parts of this problem so nutriwalking


is an exercise prescription


algorithm it's embodied in a mobile


health coach that lives in your phone


and the goal of the system is to get


people walking at the american heart


association recommendation right which


is 30 minutes of moderate intensity


walking five days a week right but if


someone who's super sedentary you can't


really ask them to


walk at that level because first they


might hurt themselves second they will


fail


and then third they will never do it


again because you know they hurt


themselves and then they failed so you


have to put them on this ramp that


slowly progresses them towards this goal


so the the you know our research looked


at what would it mean for an ai system


to have to build out this ramp and have


people you know walk up this ramp slowly


so that they're successfully able to get


to that goal


right and again it's the similar kind of


an approach we built a model of human


behavior of human walking and how human


aerobic capacity grows and then we tied


it together with


our understanding our models of


motivation so what motivates people to


uh


to pursue certain behaviors how do they


set goals and things like that and then


together that was brought into an ai


heuristic scheduling algorithm


framework to then build an agent that


could actually you know get people


walking more
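
A minimal sketch of that ramp idea: start from the person's current capacity and progress gradually toward the American Heart Association target of 30 minutes of moderate walking, five days a week. The roughly-10%-per-week progression rule is an assumption for illustration, not the actual prescription algorithm.

```python
# A toy progressive walking-goal "ramp"; the progression rule is assumed.
AHA_TARGET_MINUTES = 30
AHA_TARGET_DAYS = 5

def weekly_plan(current_minutes, weekly_increase=0.10):
    """Yield one week's prescription at a time until the AHA target is reached."""
    minutes = max(5, current_minutes)          # never prescribe less than 5 min
    week = 1
    while minutes < AHA_TARGET_MINUTES:
        yield week, round(minutes), AHA_TARGET_DAYS
        minutes = min(AHA_TARGET_MINUTES, minutes * (1 + weekly_increase))
        week += 1
    yield week, AHA_TARGET_MINUTES, AHA_TARGET_DAYS

if __name__ == "__main__":
    # e.g. a fairly sedentary person currently comfortable with 10-minute walks
    for week, minutes, days in weekly_plan(current_minutes=10):
        print(f"week {week}: walk {minutes} min on {days} days")
```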


um so there's several firsts with this


this kind of work first was this was an


actual


longitudinal ecological study of ai so


usually when you think of ai systems


they are very transactional right so you


put in a search query out comes a page um


you


and usually that's how ai systems are


projected at but this was you know a


long-term ai deployment it worked over


six weeks right so the ai actually had


to have adaptive behavior for six weeks


right it was ecological in that


people were not brought into labs to


study the behavior of ai people actually


installed this coach on


on their mobile phone and lived their


lives as they would right so we weren't


changing anything about their life we


actually were trying to influence them


as they were living their daily


lifestyles and we found that as we you


know build these


ai systems with human models


they are effective like people are very


able to build healthier behaviors and


what this research also made

super clear to me


is that ai


usually ai scientists think of


evaluating ai in a very


computation-centric manner in which we


look at efficiency


we look at accuracy and


we look at you know the speed of


computation and things like that but


when you bring these ai algorithms into


a context that's critical for humans you


have to revise how you will be measuring


efficacy right so i was working with a


physical therapist on this project and i


told her that you know we you know this


system is very highly accurate and she


said i don't care about accuracy at all


i only care about safety of these


systems because i don't get to see my


patients if the system is deployed which


means that they are likelier to hurt


themselves so i don't care if you get


them you know walking


more at the faster speed i i want them


to be safe right so that exposed to me


that as we're building these systems we


have to rethink how we are evaluating uh


what these systems are doing and again


it's like follows a similar you know


philosophy combining multiple different


fields together with ai to build uh


productive systems


all right so


just ending um


when ai


scientists usually think of ai we think


of this deployment scenario right


there's an ai system that's interacting


with the world and i delegate some tasks


to an ai and they get done right that's


how typically


ai is thought about but actually it's


much more complex than that there's


going to be various different


collaborative interactive


experiences with ai that humans are


going to have and unless we study humans


their behavior their goals and why do we


do certain things we won't be able to


study these really important


ways of creating human


ai pairs


and


that will stop us from studying these


really critical problems right so i


talked about interactive task learning i


talked about sustainable transportation


and health behavior change but then the


spectrum is very big and almost any


deployment of ai that you can think


about they're always going to be


human or a set of humans involved and we


have to think about what they're doing


and what their beliefs are


right so some gratitude to all my


colleagues and funders


and just i'll end with some key


takeaways so modeling humans in ai


systems is necessary for effective


development that's mostly just

inwardly focused it's a mantra for


ai scientists


um the second thing that's that's maybe


um


interesting to colleagues here is is


that social science frameworks and


theories have really great starting


points that can help ai system


scientists develop better ai systems


right the third thing is um


you know coming up with the right


metrics and evaluation strategies for ai


so if you evaluate ai devoid of context


you will not solve the right problem so


we need to really focus on that


uh we need to build ai systems that


learn and reason about humans like


humans and then i talked about three


different problems but there are several


others that are similar thank you


oh yeah that was fascinating um


so we're going to now uh have a talk


we're going to go back to the natural


world and technology


uh and we will have our final talk by dr


gernot wagner

uh and he is a professor now of


finance at the columbia business school


but he has also served as the executive


director of the solar geoengineering


research program at harvard


and he studies geoengineering and


environmental and green tech


thank you


there we go


um okay so


my mission today is to convince you that


anything and everything


related to


technology and society technology and


climate especially


is


at the end of the day


about moral hazard


now just full disclosure


once you have a hammer everything


feels like


the exact same thing so yes there's a


book where this came from


but um


more importantly it is this


interaction between


um


hey there is this newfangled tech


here's the tesla ai guy


and then there is how we'd like to


organize ourselves a society


this is tesla ai guy


going to europe on vacation


and finding that wait


there are ways to organize ourselves


that do not involve having a city with


600 000 people


and the bay area with 20 million people


around it all right san francisco bay


area


and pretending that that has anything to


do with


how we should organize ourselves as a


society sorry


and anyone else living in silicon valley


but right so


um


maybe this is just me in my third floor


lower manhattan walk-up making myself


feel good about


my life


uh but frankly well welcome to my


twitter feed making fun of suburbanites

jersey heights and anything


anyone in between


about basically


us knowing how to organize ourselves as


society on the one hand and on the other


looking out for techno fix after techno


after technofix


trying to turn the climate ship


around okay


now big picture


this is about climate


emissions have been going up


forever


yes covid

no it didn't solve it


we also know the solution


right we basically have known about what


needs to happen


forever


let's say 20 30 years by now


emissions have to come down


eventually that's the solution


and the big question right is it what's


the right mix technology


societal behavioral changes all these


sorts of things well this is the best


week to talk about this of any frankly


monday 3000 pages


2913 pages yet again ipcc the


intergovernmental panel on climate


change comes out and tells us


why it is a problem


what do we need to do


where things currently are going


all right so lots of detail in these


squint at the red line this is where


things are going


emissions over time


look at the other ones that were that's


where we should be


no


graph after graph of the graph


we are not anywhere close to where we


need to


be


right and yes cue headlines like


we know what we need to do


it's the politics


and just to hammer home that point


yes it's the politics of course it


always is


the left is the technical summary that


the scientists give us all right there's


the three thousand page report


there's a 140 page summary


provided by the scientists


and then


the past two weeks of this process


the ipcc process to get political buy-in


every country on the planet has a


line-item veto


power


in this two week negotiation to come up


with a summary for policy makers


for the most part that means stuff just


gets cut


well


sometimes


often frankly


it means fairly clear statements


provided by the scientists


we've got to get rid of fossil fuel


infrastructure that's the short summary


of the thing on the left


gets


converted into oh and by the way we can


add


here's the theme a technofix


we don't actually have to get rid of the


fossil fuel plant we can add some carbon


capture and storage that's what the ccs


stands for on top we can just suck it


back out after


and


up to a point yeah we can


the technology


exists


but


just to hammer home the point with


another headline


yeah we are kind of running out of time


we've been running out of time for about


30 years or you know


1965 the very first blue-ribbon panel

put a report


on the desk of the u.s president saying


this is a problem let's act


right and by the way the science is sort


of 19th century science


this is a couple centuries ago at this


point right more co2 higher temperatures


and then by the way


today is sort of an operative day right


if you look at the summary here


uh emissions have to come down by

2025


high


confidence today


april 7th we have exactly 1 000 days


left until


january 1st 2025 right now okay false


precision these deadlines of course you


know yes they are partly political even


though it is a scientific statement


but still


okay so the urgency is clear let's start


with that


now


sometimes these reports do give us new


information and you know in this case


it's


maybe not new information so much as the


new graph the new killer graph the thing


that we'll see over and over again


anyone in the climate world for the next


years


several years


um


there's a whole bunch of different


technologies on the left


and when you squint at these colors here


the blue stuff pays for itself


so let's zoom in a little bit yes it's


wind


yes it's solar of course it's always


been there's a bit of nuclear there are


some other things like reducing methane


emissions and so on and so forth but the


solution is sort of staring us


in the face


right it's


a massive deployment of in this case


existing technologies


yes


and to go back to the theme of


ai guy goes on european vacation


yeah electric vehicles will make a


difference of course


right the suburbanite stuck


driving


will want an electric vehicle or we


would want them to use an electric


vehicle


and yes that pays for itself so do


hybrids of course they do


but of course said suburbanite is


still stuck in traffic


right so yeah shifting to bikes and


e-bikes pays for itself


of course it does


it's also a technofix of sorts


right moving into smaller apartments in


cities there's also a techno fix now


more than that right it's a shift in


attitude it's all it's many more things


it's


building new homes in cities


as sort of the ultimate techno fix in


all of this


right noho soho rezoning to put a very


new york specific


um spin on things um but it's absolutely


clear that plan a is to cut co2


emissions


or perhaps more to the point


it's to cut co2 emissions and methane


and other greenhouse gases


and frankly that verdict has been with


us forever


okay um


starting in sort of the 2000s


we basically figured out that look we're


not going to do this


soon enough there is so much hurt


already built in


that we definitely definitely have to


adapt


resiliency um become resilient to


the global average warming already baked


into the equation right new york city


has already moved from the temperate


zone into the subtropical zone a couple


years ago that has already happened


we are not going to stop climate change


here


we know we already have to adapt and by


the way the theme of sort of the moral


hazard right there's a bit of this going


on here already or more to the point


al gore was on the record in the mid


1990s and saying let's not talk about


adaptation quite yet let's solve climate


change first


cutting emissions


and yeah then let's talk about it now


okay fast forward a decade


right al gore himself of course and


frankly every environmental group on the


planet has realized that


actually


talking about adaptation


might actually push us to want to do


more


on the emissions front as well it's not


moral hazard so much as maybe it's


inverse


right sort of the frying pan effect you


whack someone over the head with this


hey this is happening


we have to adapt


maybe we wake up to do more of the


former


as a result


meanwhile


plan a also involves something else


sucking it back out


that's a techno fix


that goes well beyond


wind turbines and solar panels cutting


co2 emissions


right moving to the city and so on and


so forth lots of other things here


and frankly that too we have known for


quite a while


here's


2009 over a decade ago by now well over


a decade by now


um


these are a dozen climate models


climate economy models


asked


to limit


temperature increases concentrations of


co2 in the atmosphere linked to


temperature increases to 2 degrees


centigrade above global average or


pre-industrial


temperatures


that's what this


right set of


columns is these


450 parts per million of co2 for those


in the know


well


look at the outcome


decade ago over a decade ago it was


basically impossible


with some frankly fairly heroic


assumptions


to limit global average temperatures


to the sort of temperatures we thought


are necessary to


have a livable planet we still think


over a decade ago that was basically


gone


now a lot has happened


in that decade


namely


technology


emissions are still going up but frankly

rapid deployment of

solar and


wind low carbon technologies massive


cost decreases


40 years ago when jimmy carter put a


solar panel in the white house and


ronald reagan took it down five years


later


a solar panel cost a hundred times as much


as it does today


ten years ago


cost ten times as much as it does today


nobody is taking down solar panels from


any roofs anymore we are deploying them


rapidly costs have come down


capacity has gone up electricity


generated has gone up


massively as a result


things don't look quite as bad anymore


in the sense of


omg catastrophe


but frankly


we are far from


where


most scientists


would say did say monday this week


we ought to be


so in other words


plan a


does in fact include a fourth element


here


right and now we can talk about


inequality and right the rich adapting


the poor suffering and so on the usual


story applies of course


but it is certainly clear that


by now


we more or less know


that suffering is built in


and you know we've known that for


years at this point as well


at the same time


it is clear that there is no plan b


and when i say this definitively


um


i can point to a couple fellow


economists here in shame


um in the


sense of


i was introduced by saying i used to work

on solar geoengineering i still do um very


much so um


when


people


who are not necessarily initiated in uh


um


finer details of this particular


technological intervention


first discover first hear about


solar geoengineering and i will spend a


couple minutes talking about what it


actually is


their first reaction is very much like


the tesla ai guy


with


um


electric vehicles here's the techno fix


here's the thing that will prevent us


from having to do anything else


in this case of course much more


dramatic right don't have to cut


emissions don't have to adapt don't have


to suck it back out we'll just build


this artificial sun


shield


for the planet it'll cool the entire


planet


everything will be fine


and you know


no surprise the usual suspect i


highlight mr newt gingrich here on the


right right would


pick up on that and for example in his


case literally at the time 10 or so


years ago when under president obama we


had our last go at trying to pass


sensible climate legislation in this


country


he writes an op-ed saying


ha found solution to climate change


no need to vote for this thing after all


you can solve


climate without actually cutting co2


emissions


now there's a couple interesting things


happening here


in order to be able to say this you


actually have to acknowledge that the


problem exists


which is actually


a good step in the right direction if


you will


but of course then talking about this


as if it were a plan b


you might as well deny the problem exists


right the outcome is the same you're


still voting no


on the legislation


oh in other words


yes i do think we should look into this


particular technology too


we should do the research


solar geoengineering


but no it is no plan b


it's plan a plus


another technology added to the suite of


potential technologies okay so just very


quickly what am i even talking about


it is literally building an artificial


sun shield for the planet


it's basically doing what volcanoes have


been doing forever


so when mount pinatubo erupts in the


philippines in 1991


global average temperatures in 1992


ironically just at the time of the rio


earth summit


june


92


a half a degree centigrade


almost one degree fahrenheit


cooler


than they would have been without the


volcanic eruption


emissions didn't go down


nothing else happened


global average temperatures decreased


didn't solve climate change just to be


clear


oceans are still acidifying


lots of other problems


still way too much co2 in the atmosphere


didn't address the root cause


but


if


temperatures are one of the key metrics


here


well here is a technology and when i say


here's a technology no let's not explode


volcanoes very nearly but maybe


let's do the research to figure out


whether it might be possible


to deliberately


introduce


aerosols tiny reflective


particles


into the lower stratosphere with exactly


this


in mind


okay so


how to think about this


um


it's a very complex


graph


i won't tell you what the time scale is


i won't tell you what climate risks are


but the one thing we know is it doesn't


matter what the time scale is it doesn't

matter how we measure risks if


we burn fossil fuels


those risks will continue to rise


no doubt


here's another definitive statement


if we cut emissions to zero


the sort of thing we know we need to do


well let's assume we get around to it


let's assume we actually do it


in any time scale


relevant to


human society


decades out


the climate risks


most of them we care about


will stop getting worse they will


and frankly that's the point that's why


we have to cut emissions to zero


they're not getting better and by the


way you see climate risk is largely


climate risks associated with or linked to


temperatures for example


sea level will rise for centuries after


yes temperatures will stop increasing


yes that's a good thing yes we have to


cut co2 emissions to zero


on net


but


climate risk isn't going to decrease not


in our lifetimes


so yeah we have to suck it back out


it's the only way to actually decrease


well that's where solar geoengineering comes


in


taking the edge


off there is plenty of hurt built in


here there are plenty of


people dying


literally because of unmitigated climate


change and even if we do everything


right


and news flash we won't


but even if we did


there would still be


plenty of hurt built in where solar geo


engineering might that's the research


question


actually make a real difference


okay couple more points


yes there are trade-offs


there really are


and when i say hard trade-offs

when one

contemplates doing

solar geoengineering


and in this case


highly theoretical in a big way let's do


it to an extent where we stop


temperatures from increasing


well we turn off carbon cycle feedbacks


for example


emitting more co2 naturally because


temperatures are rising


so


actually


even


just looking at


co2 impact co2 burden


if that's the only thing we cared about


solar geoengineering itself might still do


a lot of


good


on net


and now we have a hard trade-off


right not that anyone is out there


actually optimizing this in the real


world but if one were to or if we do in


our models


yes


there is a trade-off between spending


the money to cut co2 emissions versus


spending it on this newfangled tech


which of course has its own problems


but then


back to the main theme


there is moral hazard


there's newt gingrich there is my fellow


superfreakonomics economists and so on


and so forth who basically


look at this technology


and say


if we


have this tech available


we


might


will


get away with


not doing the hard stuff


not tackling the sort of things that we


actually know need to happen so yes


there are trade-offs


um now


not to put too fine a point on


this but


debatable whether this is even


moral hazard in the technical sense or


whether it's closer to a lack of


self-control


essentially right it's us deciding now a


big question is who decides and so on


and so forth but still but


at the very least yes there are in fact


these


these trade-offs


okay


now


does it exist


empirically


when we go out there


and ask people


do you think that the


availability of solar geo engineering


will in fact


detract from the need to cut emissions


in the first place


turns out there are about 30 or so


studies out there that ask just that


question


and frankly


the broad conclusion is that nobody


knows what this technology is


so they'll tell you anything you want


basically and actually that is probably


one of the most important conclusions in


all of this


we went out did one of these surveys


we asked the question two ways


will it detract from or will it lead


toward you wanting to do more


depends on how you ask the question


people just agree with you because why


not right you're a smart scientist


asking them


um


must be a reason why you formulate the


question a certain way


all right the technical

term for this is acquiescence bias

and yes it's a thing


yes solar geoengineering moral hazard is a


thing


but


opinions are so

malleable

that you can basically get any answer

you want


okay


now


there's a better way of doing this turns


out


we don't just ask a silly question


we observe people


in this case full disclosure by now a


co-author of mine but this study was um


absolutely her own christine merk et

al here 600 germans


200 of them are told about solar geo


engineering


in a


lab they're also given money because


they have to show up in this lab or in


this case an online survey


and now they can do with their own money


as they please


including


offset their own emissions


hey wouldn't you want to spend some of


your money to offset your emissions look


you're part of the problem


well turns out


people who are told


about solar geoengineering


the 200 germans in this case but still


people


are more likely


to offset more of their emissions


because they've been told about this new


technology


or in other words


the exact


inverse


of what moral hazard would lead us to


believe
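
A minimal sketch of the kind of comparison behind that finding: offset spending in an informed (treatment) group versus an uninformed (control) group. The numbers below are fabricated toy data, not the study's results; only the comparison logic is illustrated.

```python
# Toy treatment/control comparison of offset spending (fabricated data).
import math
import random
import statistics

random.seed(0)
# simulated offset spending in euros; the treated group is drawn with a
# slightly higher mean, mimicking the "inverse moral hazard" finding
control   = [max(0.0, random.gauss(4.0, 2.0)) for _ in range(200)]
treatment = [max(0.0, random.gauss(5.0, 2.0)) for _ in range(200)]

diff = statistics.mean(treatment) - statistics.mean(control)
std_err = math.sqrt(statistics.variance(treatment) / len(treatment)
                    + statistics.variance(control) / len(control))
print(f"mean offset, control:   {statistics.mean(control):.2f} eur")
print(f"mean offset, treatment: {statistics.mean(treatment):.2f} eur")
print(f"difference: {diff:.2f} eur (about {diff / std_err:.1f} standard errors)")
```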


right so if basically


technology


nuclear technology geo engineering


doesn't matter which the debate is always

about the technology versus behavior


well actually maybe


there's a way to talk about this new


technology


and in that case it doesn't matter which


technology yes geoengineering is one


example


but maybe nuclear as well


or maybe any


technofix


or it is not about


oh


technology might bail us out


absolve us of the need to do the hard stuff


but actually remind us that hey


this is a real problem


maybe we actually have to do more than


we thought we did


and


without being too


um


all encompassing here sort of the burden


of the economist right we have a hammer and


then we apply it to everything


i'd like to think that applies


much much more broadly


in the sense that


whether it's in bio or gmos or ai or yes


any sort of climate technology from


nuclear on the one hand or anything


that's not quite as




kosher to environmentalists as wind and


solar on one end of the spectrum or


solar geoengineering on the other


unless we find a way to have the sort of


conversation


about new


technology that leads to


yes and


we're doing something wrong here right


we are not going to stop


conversation


research into and development of


any of these technologies


if they are cheap


if they work


if the risks are socialized and the


benefits are internalized


the usual story


those technologies will come


yes they will


they're not going to solve every


societal problem and yes they have to be


channeled in the right direction and yes


it takes government and yes it takes


lots and lots of other things to channel


these individual wishes and wants and


desires into the right direction


but it is incumbent upon


us


as those working on the technology


talking about the technology trying to


frame the technology trying to figure


out how to channel it in the right


direction more broadly back to our tesla


ai guy


to try to get to this


inverse moral hazard as opposed to


being stuck in this


infinity loop of


any new technology will always be met


with resistance at first because of


course moral hazards dominate


thank you


[Applause]


so we've had uh you know a talk on uh


bioengineering uh


environmental and green tech uh ai and


automation


um and


what do the panelists


think are some of the the sort of


cross-disciplinary themes


that have emerged from these discussions


um and


do any of the panelists


have any curiosities about any of the


other presentations


and do you think that they speak to one


another in any ways


i guess i will open the floor that way


oh that's right


so everyone has their personal link


so yeah so anyone can jump in


oh i have a question from them


so you said that you were


so i don't know enough about gmos to be


angry or excited about it but you said


that you were excited so could you help


us understand


both sides of the debate and then why


you are on the excited side


well that's a big question to understand


both sides of the debate


and i'm probably not the world's expert


but i will just say some things


so first of all i think the first


question is what is a gmo


and that definition actually depends on


what country you're in um


so that adds even more complexity um

in general it refers to the um

introduction of


foreign dna into an organism


and


let me say that i think some of you


probably know the answer to this some


fraction of the crops in california are


gmos


um of


i think there's corn and soybean


so we are using gmos um


now


uh


i may go off on a tangent here there is


a potential revolution that could happen


in this space


i'm sure you've all heard the word

crispr


but what crispr allows you to do


in principle is is alter the genome


without introducing foreign dna


and that is one of the potential


upsides because in principle that would


not be a gmo


and in fact i think it's in china they


have declared that crispr engineered


crops are non-gmo


so that's could be a game changer


um


so back to um so i wanted to get the


definition out there


why and then the question is why


is it


what's the danger and


again


there's many answers to that question


the simplest answer is


that you are putting something


non-natural


not made by nature


into the environment and it will have


some adverse effect so the difference


between gmo and medicine is release into


the environment


at least that's my take on it um and so


that


carries a lot of weight


the political side of gmo release


and attitudes about that


are much more complex


and originate actually in europe in the


i believe when monsanto first introduced


uh


the resistant whatever it was


in europe and most of that was


politicized it was political um and the


history and this brought on the history


of the organic food movement


um


but it's been with us now um


so


okay so the

scientific resistance which could also

be in part the emotional


resistance or real is that this could


have some kind of bad effect on me or on


the environment or on things in the


environment it might kill the monarch


butterflies or something like that


okay now the


the biological risks


are


impacts on endogenous species


and and the one that i um


am most


intrigued by and potentially this is the


one that actually underlies the concern


is potential for horizontal gene


transfer and most people don't even know


what that means and that's okay there's


a great movie um oh god i'm gonna block


on it the science is embodied


in a lot of science fiction movies um


and that's


the actual inherent


scientific thing that people are worried


about now that applies


to corn that is bred at monsanto also


even though it doesn't have foreign dna


in it and if you want to see an amazing


uh


it's not monsanto anymore by the way


isn't it it's bayer i think


but visiting there and watching how they


do plant engineering is a sight to


behold they have this thing called the


the corn chipper and so they take


thousands of seeds millions of them and


they run them through a machine that


takes a picture of the seed and knows


what the perfect seed should look like


and when it sees that one it chips a piece

off and sequences its genome so


the corn breeding has


become an amazing state of the art and


so you could say those are


person-bred


organisms being introduced into the


environment so we've been doing that for


millions of years okay so why am i not


worried


first of all i'm i think that um


there's


there's an enormous


awareness of potential dangers and there


are solutions


secondly it's a question of risk versus


reward which is something that


permeates


all technology and has been


an underpinning of recombinant dna since


day one i was


not very old but the person i worked

for

at harvard was one of the star

witnesses in the recombinant dna

hearings before the cambridge city


council and if you want to watch


something an amazing piece of


interaction between


normal people and scientists i recommend


even though they're kind of grainy you


should watch that because it really the


question is what is the definition of


risk and and i think vaccines


capture


the risk versus reward and personally i


think feeding the planet


captures risk versus reward i think


capturing more carbon is a good thing to


do if it requires engineering organisms


and plants so for me personally it's


about


reward i could go on forever but i won't


Any follow-up questions? I mean, I think that's one of the master themes here, as well as... yeah, go ahead.


Maybe just to put another spin on this. Whether it's GMOs and bio, AI up to a point, although I know very little about the actual implications there, or solar geoengineering: our usual shtick as economists, as policy analysts, is benefit-cost analysis. And frankly, for most of these technologies... solar geoengineering is sort of vaccine territory in terms of benefits to costs; it's a thousand, ten thousand, a hundred thousand to one. The benefit-cost ratio, the net benefits, based on what we know, are so large that, frankly, to me as an economist, that is just the wrong decision criterion. It's not how you can look at this and then say, oh yes, it's a good idea to do it.

It is all about, building on your point about risk, risk trade-offs. It is not just risk versus reward in the sense of "there are these known and unknown risks, let's compare them to the known benefits." It is basically the risks of unmitigated climate change compared to the very real risks of a technological intervention that attempts to do something about it. It's not about benefits versus costs; it's about comparing risk to risk, comparing, in the Donald Rumsfeldian sense, the known unknowns, and, yes, worrying about the unknown unknowns, and both unmitigated climate change and solar geoengineering have both of them. That's what the research needs to be about; that's where the public policy focus ought to be.
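As a rough illustration of the distinction being drawn here, the following is a minimal sketch, with entirely hypothetical numbers, of the difference between a benefit-cost ratio and a risk-risk comparison under uncertainty; none of the figures come from the panel.

```python
# Hypothetical illustration of "benefit-cost" vs. "risk-risk" framing.
# All numbers are made up for the sake of the example.

# Benefit-cost framing: compare expected benefits to the direct cost of intervening.
expected_benefit = 10_000.0   # hypothetical avoided climate damages (arbitrary units)
expected_cost = 1.0           # hypothetical direct cost of the intervention
print("benefit-cost ratio:", expected_benefit / expected_cost)

# Risk-risk framing: compare expected damages of doing nothing against
# expected damages of intervening, where the intervention itself carries risk.
p_bad_unmitigated, damage_unmitigated = 0.5, 100.0   # unmitigated climate change
p_bad_intervention, damage_intervention = 0.1, 60.0  # side effects of the intervention
print("expected damage, no intervention:", p_bad_unmitigated * damage_unmitigated)
print("expected damage, with intervention:", p_bad_intervention * damage_intervention)
```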


So, to follow up: the question I had was about incentives, with the German study that you talked about. Can you put that in context? Why did they make the decision they did? There's a study population that is going to accept that there isn't a plan B; is that what was concluded? I just want to make sure.


Okay, so this is very specific. Right, the question is whether solar geoengineering is the sort of technology where, when you tell people about it, they say, "aha, solution to climate change found, no need to worry about anything else." In this case it was the exact opposite. Why? That's your question. That's actually my current research with Christine Merk, trying to get at this. There are three hypotheses. One is basically that this is just such a scary thing that you want to avoid it at basically all cost. So if a semi-serious scientist tells you that this thing exists, or even talks about it, you'll spend your own money in a frankly vain attempt to try to stop it; it's sort of this "oh my god, we've got to stop this technology" effect.

The other one is the slightly, or very much, more positive spin: it's sort of a wake-up effect. I described it as the frying pan: if you whack someone over the head with this, it's "look, I always knew climate change was bad, but wait, if serious people are talking about this, maybe there really is something to it." And actually I think this is the better, the more appropriate, interpretation, the hypothesis that seems to hold up fairly well. It's essentially the positive version: basically, hey, we haven't been able to convince people that climate change is bad with anything we've tried over decades; maybe this finally does it. Which, just to be clear, is a little bit of a naive view; it's not as if people haven't tried to shock people into action before. Movies exist about New York disappearing under, you know, miles of seawater or ice, depending on your perspective. So it's not the first time somebody has talked about this technology, clearly not. But frankly, public discourse on solar geoengineering has only existed for about a decade, fifteen years or so. The technology has been around for decades, but there has been a long-standing taboo against talking about it, because of the moral hazard, because it might detract from the need to cut emissions.

And, by the way, very quickly, the third hypothesis is basically that people just don't know what offsets are; they think you solve climate change by planting a tree somewhere, and, you know...


I'm going to go back to the engineering of biology, because I think there are not only perceived but indeed inherent risks, and I want to make sure people understand that there is a huge effort in the synthetic biology community to define, mitigate, and appreciate those risks; this really goes hand in hand with the development of the technology. So, for example, we have developed solutions that should protect against runaway release into the environment.

But I want to give one example that is happening right now that I think is interesting, and it is a product of climate change, and that is the release of engineered mosquitoes into different areas of the earth to combat mosquito-borne diseases such as dengue. There have been massive releases in Brazil, and there is going to be one in Florida. This is fascinating technology; I'd be willing to bet a lot of people don't even understand it. But go back to, you know, the 1950s or whenever, when malaria was still a scourge of the earth, and imagine if we could have cured that with engineered mosquitoes. Let's think back to what we did do about malaria, which was to use DDT, which, by the way, worked, hugely, in India; the problem was that we used it at too high a concentration. If they had used it at maybe one hundredth the concentration, I don't know the exact number, things might not have gone so badly for nature while still having the positive effect. So now DDT is forbidden, for the most part.

So I'm thinking about this mosquito solution, and again, if it were to wipe out local mosquito-borne diseases, I think that would have a huge impact, and a lot of people would not be asking "so what was that mosquito they released, anyway?"; they would see the outcome. I just wanted to interject that because it's a modern-day thing that's actually happening. There was another go-ahead to do this, I think in Florida, which was amazing to me.


That's interesting, and it brings to mind one of the common themes here: how does technology train, or retrain, human behavior? As you were saying, the prospect of solar geoengineering as a technological solution might actually make people believe that these problems are more tractable. And it brought me back to your presentation about training humans through artificial intelligence and intelligent agents. There seems to be a common theme there, that technology can aid humans in retraining themselves in some way, and in making the world better for humanity in some way. I was wondering if you could speak to that.


Right, that's a great connection; I missed that. So, humans are incredibly adaptable and flexible, so whatever technology changes happen, they will adapt to them; that's the nature of the species. My talk was mostly focused on designing systems so that we are supporting that behavior change in the positive direction, because behaviors can shift in either direction, and, at least as an AI scientist, I try to be cognizant of what the eventual outcome is and what it is that we are building towards.


With climate, I guess Gernot could say a little bit more about what he meant when he said, you know, that it's about changing behavior.


Okay. So I think, you know, the short version... I guess I'd focus on a point that doesn't really quite emerge much, and that has to do with what you said, that humans will really be able to adapt and are willing to adapt. I really would challenge you on that, because I'm not so sure that's the case; there are lots of morals involved that people will invoke, about whether we should or shouldn't, and we see this, of course, in the area of medicine. So I wouldn't quite go that way in terms of what people want out of technology: something might be available, but do you want it? I think that's really something that, if you're dealing with AI, would be a really serious problem for me, because whatever machine you were building, whatever AI we were going to do, you'd actually have to invoke some sort of notion of morals in the process of it not only deciding but providing an answer that you expected some kind of return on in the human world.

Which leads me to the other question I would have, both for the AI side and, for you, for automation, but I guess I would say for all of you: could you imagine, within your own technology field, anything that, even if we said the knowledge should always be there, you yourself would be reluctant to go into?


I want to address your question, and I'll be somewhat brief here, in the medical context, and give you a counterexample: this Alzheimer's drug that has come onto the market. It doesn't work, and the FDA all but showed it doesn't work, yet there is consumer demand; people want it. So what do you do with that? In fact, it may even have negative effects on health. So there's a case where you have something extreme... it's almost like climate change: there's no end to it, and yet people will do anything to get that drug.

And then your second question, what was it? What bad things... yeah, I was just going to say the same thing: bad drugs. What is a technology that can go astray? That's one example of many: bad drugs, which get distributed globally. So that's my example.


Right, so I completely agree with you that the moral commitments we make drive a lot of our behavior, and that's not where I was going with my answer or my presentation either. I think that's outside the realm of AI and technology; it's more about human societies and how we function as organizations. Our morals are defined through our communities, through our experiences, through intellectual exchange with other people, and I don't see a big role for technology in that specific realm. But once you have decided, "I want to learn this new thing" or "I want to develop this new behavior," that's where I see some of the work I've done start playing a role: to support that.

On the second question, the dangers of AI technology: I think they are already everywhere. Facial recognition technology, for example, is heavily talked about. The second thing, and this is more recent, is using machine learning, AI methods, on audio signals to infer a person's mental health status. Because these operate as black-box systems, we don't understand them well, we can't inspect them, and we can't explain their behavior, while the determinations being made will lead to consequences for the person that can be tremendous. So that's where we need more focus and more insight, and there are several examples like that in AI where more critical inquiry is needed.


Yeah. So, for a person in technology, I'm actually very skeptical about using technology myself, so I get it, coming to this question of "should I use this technology, and what are the moral issues?" But also, you know, how was that drug authorized? I remember reading about it in the news and thinking, this went through too fast; the whole process was weird in terms of the recommendation by the government, by the FDA and so on.


So I do believe that our consumers, the users, are going to have to come at it hopefully having done the right homework. When I build my system... and I actually want to bring this up: when we built the system for route recommendation and traffic, we assumed, we hoped and assumed, that a certain percentage of people would reject the recommended route, because if everyone takes the recommendation from our application, it actually does not optimize our utility function. So we are hoping there will be these skeptics, and we have to account for that as we build the system, especially when it comes to cooperative problem solving. In other cases, maybe there are different ways to convince people, again looking at subgroups and what would be needed to convince them to use the app.
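As a rough illustration of designing for partial compliance, here is a minimal sketch, with made-up numbers and a made-up congestion model, of how an expected travel-time calculation might account for the fraction of drivers who reject a recommended route; it is not the panelists' actual system.

```python
# Hypothetical sketch: congestion when only some users follow a route recommendation.
# The numbers and the linear congestion model are invented for illustration.

def route_time(load, free_flow=10.0, slope=0.05):
    """Travel time grows linearly with the number of cars on the route (toy model)."""
    return free_flow + slope * load

def expected_times(n_drivers, compliance):
    """Split drivers between recommended route A and alternative B,
    assuming a fraction `compliance` accepts the recommendation to take A."""
    on_a = n_drivers * compliance
    on_b = n_drivers * (1.0 - compliance)
    return route_time(on_a), route_time(on_b)

for compliance in (1.0, 0.7, 0.5):
    t_a, t_b = expected_times(n_drivers=1000, compliance=compliance)
    print(f"compliance={compliance:.0%}  route A: {t_a:.1f} min  route B: {t_b:.1f} min")
```

With full compliance the recommended route gets overloaded; a share of "skeptics" spreads the load, which is the point the speaker makes about the system not optimizing its utility if everyone obeys.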


For example, I'm not a runner, but a year ago I got a running app, only because the New York Times, after all these years, ran this article about Couch to 5K. I thought there was no way I was going to do this at forty-plus, but I took it up, and I can run. I'm not a great runner, but I can run for forty-five minutes or more. It took several months, but it was because of the incremental approach they took. So I think people change over time, and there can be different ways of getting users to adopt something; you don't have to expect that everyone will use your technology.


Thank you. There are a lot of different things I could ask about, but I'll try to focus on the three particularly interesting things that came to my attention. One is, when you talked about the bionic leaf: how is that being practically implemented for actual practical use? I'm not a scientist, so I'd like to know how that goes.

The second question deals with the traffic situation in New York City. One of the solutions for the city of New York is just to penalize drivers from a certain point of entry into the city, or simply to impose an increased cost, and I was wondering, in terms of your traffic studies, whether there are ways to address that. I mean, Robert Moses decided to deal with it by just building more roads, more roads, more roads, and, as we found out, that just kept increasing the number of people driving on those roads, which doesn't really solve the problem; and we also, of course, have a problem on the Cross Bronx Expressway.

And the third thing deals with climate change. You talked about geoengineering, and it seems to me that that conversation, or that concept, doesn't carry any quote-unquote cost in terms of an individual having to adapt or reduce or do anything. So it seems to me the Germans might have reacted, in a sense, by saying, "wait a second, I don't really have to do too much; this seems to be a great, wonderful solution to everything, so I'll put a little bit of money toward it and that'll solve my problems." If that is the case, and you actually illustrate it on your slide, what is geoengineering, what is its cost, and, rather than introducing it far down the graph as a way to reduce carbon emissions after you've implemented the other adaptations, reducing carbon, recapturing carbon, that kind of stuff, why isn't it at least a talking point when a politician goes out and says, like Newt Gingrich, that geoengineering solves everything? Why can't it, in whatever form it takes, just be implemented right at the top and therefore avoid the whole discussion about having to reduce carbon and whatnot? And how do you get around that political point?


Okay, I guess we should take those in order. Well, mine is the easiest one. The bionic leaf is actually a closed system; it's self-contained, so you can imagine it sitting in a box. I think the comparison to make, and I'll come back to the bionic leaf, is to, say, algae. Algal biofuels were a big deal for a while. The thing about algae is that it has to be in direct contact with sunlight, and that requires huge surface areas; it's like agriculture, and this has been one of the limiting factors in algal engineering for, say, biofuels. The bionic leaf does not have to be in direct contact with sunlight; it can get its energy, its electricity, from anywhere, so the box can sit in your basement. You could have a windmill on top of your house creating the electricity that goes into the bionic leaf, drives the water-splitting reaction, and biomass will grow. It is a self-contained system, and that's actually one of the breakthroughs: you can have the electrodes in direct contact with the living system, but that system can live somewhere separate from the incoming energy.
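For readers who want the chemistry made explicit, here is a simplified, illustrative reaction scheme for the kind of water-splitting plus hydrogen-oxidizing-bacteria system described above; the generic biomass formula CH2O and the stoichiometry are textbook simplifications, not figures given by the speaker.

```latex
% Electrolysis step, driven by electricity from any source (e.g. a rooftop windmill):
\[ 2\,\mathrm{H_2O} \;\longrightarrow\; 2\,\mathrm{H_2} + \mathrm{O_2} \]
% Microbial CO2 fixation by hydrogen-oxidizing bacteria, biomass written generically as CH2O:
\[ \mathrm{CO_2} + 2\,\mathrm{H_2} \;\longrightarrow\; \mathrm{(CH_2O)} + \mathrm{H_2O} \]
% Part of the hydrogen is respired with the oxygen to power the cells:
\[ 2\,\mathrm{H_2} + \mathrm{O_2} \;\longrightarrow\; 2\,\mathrm{H_2O} \]
```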


How would that work in dealing with the problem of food shortages in those places of the world that don't have access... is there a practical application of that for people who don't have access to food?


Right. So it's amazing you bring this up. I'm actually involved, I have some work in this space, and there's also a DARPA program and even some startup companies; it's called "food from air." Remember, these bacteria are not using sugar, which would be the normal route: you'd have to grow sugar cane and give them sugar so they grow and make food. Instead they use air: CO2, hydrogen, and nitrogen. So in principle the bacteria could grow, and you could use them directly for food, or you could have them produce food components. This is a big deal right now. Ironically, if you read the 1970 NASA report, this same idea is in there; it just never was implemented. So you've hit on what I think is one of the most exciting ideas here.


I don't know the specific policy you were mentioning for transportation in New York, but I don't think there is going to be a solution that is just more technology. It's a policy problem, it's incentives, and technology would be one part of that ecosystem. There have been studies showing what happens when incentives, or sort of artificial incentives, are created: there was a study out of MIT that demonstrated that if we pay people to, say, take buses, then as soon as we remove the incentive the behavior goes back to normal; people just revert to whatever their baseline preference was, whatever their baseline utility model was. And that's where the technology part, or at least the studies that we did, starts making sense: there is an underlying utility that people use to make those decision choices, and if we can really leverage that in our technology systems, then the systems and the route recommendations are aligned with how each individual thinks about their transportation. That's where the technology starts playing a role. But I wouldn't claim that the technology I talked about would by itself solve all the problems that exist; it's going to take collaborative policy, improving our transportation network, and changing attitudes so that people do want to live in cities and want to take public transport. It's a multi-pronged approach, and technology would be only a very small part of it.


Our next question... and let me just agree with this. Congestion charging in lower Manhattan, right? This is literally the one law we have in economics. We often run around pretending that people are like atoms in a vacuum and behave according to those particular laws, which of course is not true. But the one law we have that holds: price up, quantity demanded down. It works every single time, with maybe a couple of exceptions, and we know CO2 isn't one of them and congestion isn't one of them. So yes, one solution, if you will, for traffic in New York City is to make everyone personally pay for the negative effects they impose on others when they choose to drive into New York; that's where congestion charging comes in.
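To make the "price up, quantity demanded down" point concrete, here is a minimal sketch with a hypothetical constant-elasticity demand curve; the elasticity, baseline trip count, and charges are invented for illustration, not figures from the panel.

```python
# Hypothetical sketch of the law of demand applied to a congestion charge.
# Elasticity and baseline numbers are made up for illustration.

def trips_after_charge(baseline_trips, generalized_cost, charge, elasticity=-0.3):
    """Constant-elasticity demand: quantity scales with (new cost / old cost) ** elasticity."""
    new_cost = generalized_cost + charge
    return baseline_trips * (new_cost / generalized_cost) ** elasticity

baseline = 700_000          # hypothetical daily car trips into the charging zone
cost_per_trip = 20.0        # hypothetical generalized cost per trip (dollars)
for charge in (5.0, 10.0, 15.0):
    q = trips_after_charge(baseline, cost_per_trip, charge)
    print(f"charge ${charge:>4.0f}: about {q:,.0f} trips/day")
```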


And just on the raw politics of it all: yes, we're going to get it, but there is currently an environmental review, run by the state, that will, coincidentally, conclude right after the gubernatorial election. Why? Because it is a state issue whether New York City is allowed to have congestion charging, and if you're the governor of New York... yes, you benefit eight million New Yorkers, or in this case only Manhattan, sadly, because it's just lower Manhattan, but anyone who drives into the city, the twelve million in the suburbs of New York, would not like to pay, even though overall societal well-being increases for everybody, and those who then have to drive in a pinch will be better able to do so, at a cost. This is sort of the high cost of free parking.


Now, okay: geoengineering. Maybe let me end on a medical analogy; medical analogies abound here, painkillers, chemotherapy, and so on, but this one drives home the point, I think, about technology versus behavior. Should we tell the fifteen-year-old who hasn't picked up smoking yet, "hey, don't worry, go for it, because if you get stage-four lung cancer there is chemotherapy, we can fix you"? Or, asked differently, should the fifteen-year-old pick up smoking because chemotherapy is available? The short answer, of course, is no. Now, if you're the seventy-five-year-old stage-four cancer patient, chemotherapy is in some sense the only thing that is going to help extend your life; diet and exercise are not going to do it anymore. They could or should have, but at that stage they are not going to help.

Well, cutting CO2 emissions is like diet and exercise, offsetting your emissions is like diet and exercise in this case, and solar geoengineering is like the technical intervention, the medicine, the surgery, the chemotherapy, the painkiller, that might allow you to exercise in the first place, because it is the painkiller that allows you to get up in the morning at all. So is solar geoengineering the first thing you should do? No. But of course the real question is: is our planet currently closer to the fifteen-year-old, who should be told to diet and exercise because that's good for you and will extend your life, or is it closer to the seventy-five-year-old cancer patient for whom this particular drug, this particular intervention, might be the only, or the best, hope of extending their life? I won't speculate about where I think we are, but yes, let's look into solar geoengineering, because frankly it is pretty darn late in the game.


I had another question, going back to the Dr. Jane example. I found your example so interesting, but, if I heard correctly, you were trying to talk about how Dr. Jane might take the bus, and be shown that there would be a lower cost to doing so. As I was listening, I was thinking that it's so interesting, everything you were saying about bringing in the theme of, you know, programming humans, but it would be very rational; you're assuming a kind of rationality on Dr. Jane's part which may or may not hold. So I was wondering, when things are emotional and not simply rational, whether you can really count on rational-choice considerations to be operative in the example you gave. And then I just wanted to say that I'm seeing a theme emerging, from last night's Paul Krugman talk to a number of you today: technology is doing all these amazing and concerning things, and that balance, I think, is going to be a theme. But a related theme is that you don't lose the importance of the human, the human balance, how human beings are responding; technology is not overriding that. So how do you make decisions, how do you go in... I think Anita was talking about taking a variety of concerns into account in some of the examples, about transparency and so on. So it's an interesting theme, and I just wanted to ask: is there a rationalistic assumption here?


Right, so there is; I would agree with that. The reason for that rationalistic assumption is that to bring any kind of theory into AI computations, it has to be mathematical, and the mathematical theories that exist leverage the rational-thinking framework, so it's just an easy, natural fit. That is a bias that comes from the nature of the science. In the Dr. Jane example we did try to broaden that perspective out, so the influence problem that we were discussing does include things like messages that may be compelling to different people. So if we know that Dr. Jane cares about the environment, we can influence her more...


Oh, it's 1:30. Go ahead.


So, I was reacting to your push that there has to be a more humanistic component involved, and we did think about it. The algorithm I talked about was mostly concerned with what we should recommend to Dr. Jane; the humanistic component comes into how we should make that recommendation. It could be that for someone who cares literally just about how much it costs, telling them that they would cut their expenditure by this much by taking the bus is the most compelling way to deliver the message, but for Dr. Jane, who cares more about the climate, the better way to frame the message is that she is contributing to a reduction in emissions. So we did lay out a framework for how that could happen. Integrating those ideas into AI systems is more challenging, because again they are not mathematical, but you're on the right track with that.
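As a toy illustration of preference-based message framing of the kind just described, here is a minimal sketch; the preference profile, message templates, and selection rule are all hypothetical, not the panelists' algorithm.

```python
# Hypothetical sketch: pick the framing of a fixed recommendation ("take the bus")
# based on what the user is known to care about. Profiles and templates are invented.

FRAMINGS = {
    "cost":    "Taking the bus would cut your commuting costs by about {cost_saving}.",
    "climate": "Taking the bus would cut your commuting emissions by about {co2_saving}.",
}

def frame_message(preferences, cost_saving="$40/month", co2_saving="30 kg CO2/month"):
    """Choose the template matching the user's highest-weighted concern."""
    top_concern = max(preferences, key=preferences.get)
    return FRAMINGS[top_concern].format(cost_saving=cost_saving, co2_saving=co2_saving)

dr_jane = {"cost": 0.2, "climate": 0.8}   # cares mostly about the environment
print(frame_message(dr_jane))
```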


Just to add to that: I would say there are models that take into account that the response can be different. Even a simple Markov decision process will take into account the uncertainty in the user's response and then treat it as a sequential process: because they have rejected the suggestion, these are the possible actions you can now take. So you think about an action strategy not as a single schedule or plan but really as a tree; you are walking through this tree, finding the outcome that actually occurred, and then adjusting to the current environment. And this is true not just of the human response: when the agent takes an action, the environment itself can respond differently. You think this is what's going to happen, but it did not happen, so how do you deal with that contingency? It has to be part of the systems we build, so it's definitely something that we take into account.
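Here is a minimal sketch of the kind of sequential, tree-structured decision process described above: a tiny finite-horizon problem in the spirit of a Markov decision process, where the user may accept or reject each suggestion and the agent picks actions by expected-value backup. The actions, probabilities, and rewards are invented for illustration.

```python
# Hypothetical sketch: the agent suggests an option, the user may accept or reject it,
# and after a rejection the agent gets another chance. All numbers are invented.

# action -> (probability of acceptance, reward if accepted, reward if rejected)
ACTIONS = {
    "suggest_bus":     (0.6, 10.0, 0.0),
    "suggest_carpool": (0.8,  6.0, 0.0),
}

def best_action(steps_left):
    """Return (best action, expected value) with `steps_left` suggestions remaining."""
    if steps_left == 0:
        return None, 0.0
    _, future_value = best_action(steps_left - 1)  # value of the subtree after a rejection
    best_name, best_value = None, float("-inf")
    for name, (p_accept, r_accept, r_reject) in ACTIONS.items():
        value = p_accept * r_accept + (1 - p_accept) * (r_reject + future_value)
        if value > best_value:
            best_name, best_value = name, value
    return best_name, best_value

print(best_action(steps_left=2))  # -> ('suggest_bus', 8.4)
```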


Do you have an additional question?


I guess this is for the people dealing with, essentially, automation and AI. About AI: let's assume you have a problem that you want to have solved by a robot that has been programmed in a particular kind of way. So you're looking at the program, and the outcome of the program is basically decision one, and possibly decision two, given what is analyzed by decision one. But in the process of making decision one... if we're talking about the whole human population, which brings different realities to a problem that has to be decided, then the issue really is taking the median, if you're actually doing this; that is, you have these outliers, so you're taking the median and you have to account for that. So even in decision one you're going to use the median, but then the interesting question to me is what happens between decision one and decision two once that's going on, and I'm wondering what kinds of technologies are actually being worked on now that address that particular issue.


Yeah, again, a great question. I call this the envelope effect, and you can abstract it down even to a scheduling problem: if you miss task number one by a little bit, that affects when you start task number two, and that is also missed by a bit, so how much do you end up missing your deadline by? I think some of the formulations we would bring in involve a thresholding effect, like by what percentage am I off (this is just a simple formulation), how far am I from where I expect to be in this particular situation, and then re-planning or re-scheduling what we're doing. So again, it's a sequential decision-making process that takes into account all the different ways you could land; at least my work takes that approach. But if you keep missing where you expect to be, and that adds up beyond a threshold, then I think you should hit the red button, redo things, and go back to it.
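Here is a minimal sketch of the thresholding idea just described: track how far execution has drifted from the schedule and trigger a re-plan once the accumulated slippage crosses a threshold. The task durations, slippages, and threshold are invented for illustration.

```python
# Hypothetical sketch: threshold-triggered re-planning for a simple schedule.
# Durations, slippages, and the threshold are made up for illustration.

def monitor_schedule(planned, actual, threshold=0.15):
    """Compare planned vs. actual task durations; re-plan when cumulative
    relative slippage exceeds `threshold` (here, 15% of planned time so far)."""
    planned_so_far = actual_so_far = 0.0
    for task, (plan, real) in enumerate(zip(planned, actual), start=1):
        planned_so_far += plan
        actual_so_far += real
        slippage = (actual_so_far - planned_so_far) / planned_so_far
        print(f"after task {task}: slippage = {slippage:+.1%}")
        if slippage > threshold:
            print("  -> hit the red button: re-plan the remaining tasks")
            return task
    return None

monitor_schedule(planned=[30, 45, 60], actual=[34, 55, 70])
```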


One question I had deals with the radiologists. How do you assure the radiologist, who may just think of his job as analyzing x-rays, that he is not going to be out of a job, and that somehow he's going to be a radiologist who can incorporate AI? And the other thing goes back to geoengineering: where are we in terms of geoengineering? You talked about an aerosol kind of thing. What's the status of geoengineering for dealing with that part of the graph? What's the cost to society? What's the political framework for it?


So, I brought up the radiologist quote. The way I see it: again, this is vision technology, and vision algorithms, deep learning and so on, have advanced so much that they are probably doing pretty well on very specific, well-defined problems, making the right kind of prediction in the average case. I would say there are two concerns. One is outliers: given what we have seen on the health side with preterm babies and clinical data, in vision there can be similar issues, and this is health, where there are high costs to being wrong in your results, so I think having the expert handle the outliers is one answer. The other is, again, context. In AI we are still in the space of what is called narrow AI: we build systems that are very good at doing one thing, and maybe a couple of similar things, but we are nowhere close to how humans are able to assess what a person's background is, what they were exposed to, and how that affects the image screening I'm looking at. As we move towards what is called general AI we will have to take this into consideration, but the expertise that physicians bring, where sometimes they are just able to see the patient, take thousands of features into account, and make a decision about where that patient is in terms of risk, we are not there yet; we are very much in that specific, narrow-AI space.


Where are we with geoengineering? We are at... after a couple dozen peer-reviewed papers fifteen years ago, there has been an exponential increase in scientific interest, which means we now have a few hundred peer-reviewed papers, 850 or so, much too little to make any kind of informed decision about whether this is even a good idea to contemplate. Or, put differently, we are at the stage where the National Academies, for the fourth time by now, have come out with a report, last spring in this case, saying we ought to do research, we ought to have a national research program, it ought to be open and transparent and so on, nobody owns the patents and so forth, and it ought to jump-start governance conversations at the highest levels. Governance, to me, means: let's talk about it, let's make sure we understand what it is. But frankly, and call me biased, as the founding executive director of Harvard's solar geoengineering research program: we ought to do a lot more fundamental research into the technology before we have any of these conversations about whether we should deploy solar geoengineering at a global scale, never mind that nobody knows who the "we" is in that conversation.


So, if I can be so bold as to contrast this with the biology solution: if we enhance photosynthesis in plants by twofold, we can already say how much carbon drawdown there will be. So it's a technology that I would say is ahead, and nature already does it. I'm not saying we shouldn't do all the things you made a very good point about, but I think there are technologies that won't be the complete solution but are good to go; they need more research, and people are working on them, so that's good. It's not all hopeless; there is stuff happening, and more should be happening, for sure.
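To illustrate why the drawdown from enhanced photosynthesis is straightforward to estimate, here is a back-of-the-envelope sketch; the per-hectare fixation rate, planted area, and fraction retained as stable biomass are all hypothetical placeholders, not numbers from the talk.

```python
# Hypothetical back-of-the-envelope: carbon drawdown from doubling photosynthesis.
# Every number here is a placeholder chosen only to show the arithmetic.

baseline_fixation = 10.0   # tonnes CO2 fixed per hectare per year (hypothetical crop)
enhancement = 2.0          # "enhance photosynthesis by twofold"
retained_fraction = 0.2    # fraction of the extra fixation kept as stable biomass/soil
area_hectares = 1_000_000  # hypothetical planted area

extra_drawdown = baseline_fixation * (enhancement - 1.0) * retained_fraction * area_hectares
print(f"additional drawdown: {extra_drawdown/1e6:.1f} million tonnes CO2 per year")
```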


And to be clear, it's not either-or. Beggars can't be choosers; we need it all, and we needed it yesterday.


Yeah, that's a good point. So, okay, I guess that completes our Q&A. I think that's a good statement to end on: beggars can't be choosers. So, okay, well, many, many thanks to the four of you for your wonderful talks and Q&A, and to the audience.

