Conclusions drawn from the discussion with Letitia Parcalabescu – 18th of November (EAVI Conversations 2021)
Futuristic movies generally depict technology as an entity independent of humankind, able to make its own decisions and ultimately capable of ruling the world, or even the universe.
Whether or not things will go that way, if we simply look at the present we notice that, despite its name, and notwithstanding the alarmist discussions around it, AI is not more intelligent than us. Our speaker Letitia Parcalabescu affirmed: “If intelligence has something to do with how quickly the machine learns something, with adaptivity and energy efficiency, meanwhile it is also solving tasks, then the algorithms we have today fall really short on these characteristics”. It might seem surprising, but to function AI needs far more examples, classifications and links than our brains would need to reach the same conclusions.
Although in a previous session of EAVI Conversations we already addressed the definition of AI in positive terms, i.e. explaining what AI is, we did not focus on what AI is not, dismantling people’s most common misconceptions. AI is not robots and hardware; it is rather something we cannot see. AI hides in software and automated algorithms, which in some cases also control hardware, but the machines themselves are not part of the Artificial Intelligence domain; they can be considered add-ons.
We are in a trial period, and this does not only mean that no safety measures are in place, but also that AI itself is not ready yet and its potential has not been developed to the fullest. Virtual reality is still at an embryonic stage, and even the famous Alexa cannot be considered fully useful, due to technical constraints and to the many household appliances that still cannot be connected to it. Additionally, AI systems are complex and rely on long chains of dependencies. Even so, it is undeniable that AI could be extremely useful for people with disabilities, even at this stage.
Focusing on the benefits of technology, we may discover that even if Alexa is, for most of us, little more than a funny microphone to talk to that also talks back, it could radically change and improve household interactions for visually impaired people. This reflection might sound predictable, but at EAVI we have the feeling that discussions about the future of AI rarely focus on the improvements this kind of technology could bring to our lifestyle.
Letitia is confident AI will not increase the number of issues and risks compared to those we face today, as she firmly believes in a law of conservation[1] of the number of problems: their nature will change, but not their number (this deterministic view entails that there is no hope for a decrease either!). In her vision, we could witness a shift from matters lying at the bottom of the hierarchy of human needs[2] (which, as said above, technology could solve) to matters at the top of the pyramid, involving less pressing needs or things that can be considered non-essential or even luxuries. The goal is for Artificial Intelligence to help us notably raise our standard of living.
The real question is: are we ready for AI? What we know for sure is that most of us cannot grasp it, and formal education does not help. AI is hard to understand and not easy to explain, as it involves domains in which we do not necessarily have great expertise, such as statistics, large numbers and coding. For this reason, it would be preferable to develop school curricula that teach more of the basics of AI (automated algorithms, how recommendations on different platforms work, coding) rather than traditional advanced mathematics or physics, which we are unlikely ever to use or come across (and, even if we did, really complicated problems and calculations could be solved by machines anyway). It seems unfair that, in an increasingly digital world, everything concerning technology is left to the individual’s willingness and interest to learn.
The most concerning aspect analysed during the session was how AI is developed: mainly by technical experts, without investing in the multidisciplinarity of the teams involved, e.g. by including psychologists, sociologists and, in general, the humanities. This does happen in some cases, but not as often as one would expect. Professionals in the field hope for more collaboration in the future, especially to prevent biases from being reproduced by this kind of technology, an area that is still largely unknown territory and where more and more research is being done.
It is true that improvements need to be made, but AI can undoubtedly be used for good and for social purposes; think of virtual reality being used therapeutically to treat depression. Letitia hopes AI will one day be developed and tailored to the needs of specific cases and communities, as it currently comes in a one-size-fits-all configuration that does not adapt to human necessities and requirements.
[1] By analogy with the conservation laws of classical physics, such as the principle that matter cannot be created or destroyed in an isolated system.
[2] https://en.wikipedia.org/wiki/Maslow%27s_hierarchy_of_needs
Are you interested in watching the different sessions of the EAVI Conversations 2021?