‘The development of full artificial intelligence could spell the end of the human race’

 – Stephen Hawking

This opening quote does not exactly instill a positive outlook, but it’s a perfect setting for exploring why this argument exists.

AI is the buzzword of the moment, and while we may all be using it, do we have a good understanding of it? And what stance is the European Union taking? In a world where we are still struggling to convince people that media literacy is important, is AI facing a similar battle?

First, let’s acknowledge that Hawking isn’t the only one with this view. Elon Musk has more recently called AI our ‘biggest existential threat’, and many prominent researchers feel the same. One would imagine that such highly educated, intelligent figures have a better grasp of these things than the rest of us. Concerns like these have existed since the introduction of computers, but the topic has become more contentious with advances in machine-learning techniques, which have given us a clearer idea of AI’s potential.

One of the biggest problems with trying to understand AI is the ambiguity of the term itself. It can refer to so many things that it is hard to grasp. So, let’s try to break it down.

In simple terms, Artificial Intelligence is the goal of making a computer capable of being ‘intelligent’. Examples are already everywhere, from Siri to Alexa. Researchers generally distinguish two categories: ‘Narrow AI’ and ‘General AI’. Narrow AI covers systems that are better than humans at specific tasks, like playing chess; General AI, which we don’t actually have yet, would outdo humans across many domains. Narrow AI is what decides what you see in a Google search or on your Facebook newsfeed.

This type of AI raises a salient point about the bias of the technology, and with it ethical questions. The bias applies not only to information but also to people. The problem today stems from ‘the disconnect between what we tell our systems to do and what we actually want them to do’ (Piper, ‘The case for taking AI seriously as a threat to humanity’, 2018). For example, police departments use facial-recognition algorithms, and these can go very wrong, especially for dark-skinned people, leading to low-income and minority communities facing unfair profiling (MIT Technology Review, 2019). A recent case in the US also highlights the problem: an 18-year-old was accused of stealing iPhones from four different stores. He believes that Apple’s algorithm linked video footage of the thief to his name, leading to his arrest. His ID (which carried no photo) had previously been stolen and was presumably being used by the thief, who looked nothing like him. The algorithm, however, had been trained to connect the name with the footage (BBC, 2019).
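One way to see this ‘disconnect’ concretely is with a toy simulation. The numbers and the one-dimensional ‘match score’ below are entirely made up for illustration; this is not a real face-recognition model, just a sketch of how a decision threshold tuned on skewed training data ends up failing the under-represented group far more often.

```python
# Toy sketch: a threshold tuned on data dominated by group A
# produces far more false rejections for under-represented group B.
import random

random.seed(42)

def make_samples(n, mean):
    """Fake 1-D 'match scores': genuine matches score around `mean`."""
    return [random.gauss(mean, 1.0) for _ in range(n)]

# Group A dominates the training set; group B's genuine matches
# happen to score lower on our imaginary feature.
train_a = make_samples(1000, mean=2.0)  # well represented
train_b = make_samples(20, mean=0.5)    # barely represented

# Pick the threshold that accepts ~95% of pooled training matches.
# Because group A dominates the pool, the threshold lands near A's scores.
pooled = sorted(train_a + train_b)
threshold = pooled[len(pooled) // 20]

def false_reject_rate(samples):
    """Fraction of genuine matches the system wrongly rejects."""
    return sum(s < threshold for s in samples) / len(samples)

test_a = make_samples(500, mean=2.0)
test_b = make_samples(500, mean=0.5)

print(f"threshold:        {threshold:.2f}")
print(f"false rejects, A: {false_reject_rate(test_a):.1%}")
print(f"false rejects, B: {false_reject_rate(test_b):.1%}")
```

The system looks accurate ‘on average’ because the average is dominated by group A; group B carries almost all of the errors. This is exactly the gap between what we told the system to do (maximise overall accuracy) and what we actually wanted (work fairly for everyone).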

While these are technical issues, it seems the unknown future of AI is what really makes people nervous.

So, what is the EU doing in this area? Well, apparently, we have a robust AI industry here, and there is even a ‘European AI Alliance’ forum to engage and encourage discussion on relevant topics. Twenty-five European countries also signed a ‘Declaration of Cooperation on Artificial Intelligence’ in 2018, highlighting the importance of working at a European level and not just a national one. Following this, a high-level expert group (including representatives from Google, Bosch, Orange and IBM, to name just a few) was formed to develop guidelines for AI ethics, leading to the ‘Ethics Guidelines for Trustworthy AI’ published in April 2019.

The approach taken appears to focus on AI’s ability to enable and boost research capacity while also ensuring that AI works to the advantage of citizens.

In terms of using AI to our advantage, alongside the European AI Alliance sits the Digital Single Market, something we hear about a lot. Adopted in 2015, it encourages digital opportunities and sets out a Digital Europe programme with €9.2 billion of investment up to 2027. Of this, €2.5 billion is earmarked for spreading AI across Europe. Much of this is visible in the Digital Hubs sprouting up in capital cities.

While all of this is positive, in a more protective move the European Parliament in 2017 adopted a resolution on Civil Law Rules on Robotics, following an open consultation, to address the ethical, legal and social issues raised by robotics and AI developments. The results were published and led to a decision to regulate the development of AI, putting data protection and digital rights to the fore, a direction also reflected in the General Data Protection Regulation (GDPR).

So, it would seem that there are clear steps being taken to prepare for the inevitable changes that are occurring. These changes are exciting. However, as in any society, there is a clear gap between those who will be able to take advantage of such changes and those who won’t. As I mentioned, we still seem to be trying to convince schools and teachers that teaching media literacy and citizenship is important. Will asking them to also incorporate ethical guidelines about AI push them over the edge? Once again, a top-down approach is being implemented when a bottom-up one is also needed. Europe can only set the guidelines; it is down to national governments to take the hint and develop their own contingency plans.

Sources:

https://www.vox.com/future-perfect/2018/12/21/18126576/ai-artificial-intelligence-machine-learning-safety-alignment

https://www.technologyreview.com/s/612775/algorithms-criminal-justice-ai/

https://futureoflife.org/ai-policy-european-union/?cn-reloaded=1

https://www.bbc.com/news/technology-48022890
