By Dr. Sven Muehlenbrock, Partner and Head of Risk Advisory, and Dr. Markus Lamest, Senior Adviser – Risk Advisory, KPMG Luxembourg.

When I learned to ride a bike, one hot summer day in the 1980s, I believe I got the hang of it after only a few bruises and one scraped knee. Learning is easier in youth because, during childhood, the neuroplasticity of our brains gives us a significant capacity to learn—we form and reorganize synaptic connections eagerly, e.g. in response to teachers or new experiences. Now, three decades later, humans are getting better and better at reproducing this ability in computers. Indeed, artificial intelligence (AI) is beginning to transform our workplaces and society as we know it: billions of people are affected by AI either directly, as developers or users, or indirectly, simply through their work environments and everyday lives. For instance, AI may control the traffic on your commute, or your preferred online shopping portal. This article delves into AI’s impact on our lives and discusses which stumbling blocks—some of which fuel unfounded fears in society—still stand in the way of a bright, AI-enabled world. From a global perspective, Luxembourg and its visionaries are well positioned to play a substantial, pioneering role in making that technological vision real.

Endless possibilities

AI is widely used as an umbrella term for technologies that enable computers to perform tasks usually associated with human intelligence.[1] Sub-disciplines include machine learning (computers learning and making connections from collections of data), natural language processing (computers understanding and holding natural dialogues with humans) and computer vision (computers recognizing and interpreting images), to name only a few. The use cases are almost endless: in healthcare, universities are working with tech companies on AI-enabled tools to diagnose and treat mental illness;[2] in the public sector, AI applications support teachers in grading student papers; in supply chain management, AI is used to optimize warehouse efficiency; in retail banking, AI accelerates and improves credit decisions; in marketing, AI can automate the set-up and monitoring of social media campaigns.

AI is here to stay—and to grow. Today, 80% of businesses are investing in AI, and one in three entrepreneurs plans to invest more within the next 36 months just to keep up with competitors.[3] As a new must-have, AI is putting pressure on companies to adapt quickly, which often means seeking out architects skilled in the technology. Because there was previously no widespread need for such profiles, however, these experts remain rare and expensive.


Dark clouds on a bright horizon

As bright as the AI-enabled future looks, three major stumbling blocks stand out, which together leave many skeptics uncomfortable about having robots in their daily lives. While these barriers will certainly hinder AI adoption, they could also become opportunities for Luxembourg and its visionaries.

1. AI and workforce. Undeniably, one of AI’s greatest strengths and most common current uses is automating processes, replacing manual tasks with digital labor. Not only does this boost productivity, it also limits human exposure to sensitive corporate data and reduces the need for seasonal labor.[4] Some futurists extrapolate these developments into doomsday scenarios in which humans fight computers for jobs, or into more modest visions in which only a lucky core of employees benefits from AI. Contrary to these scenarios, however, a recent Gartner study found that, by 2020, AI will have created 2.3 million jobs while eliminating 1.8 million,[5] leaving a positive balance of half a million new jobs. Emphasizing the likelihood of a positive scenario for most of society and reducing unfounded fears are paramount for governments as they prepare their workforces for the skill shifts that AI demands. Luxembourg is well placed to lead these developments, thanks to the long-standing dominance of its tertiary sector, its strong investment in human capital, and its workforce’s cultural variety and openness.

2. AI and cyber risk. Again this summer, several data breaches and hacker attacks became public. Most of them hit industries, such as airlines, that depend heavily on processing sensitive customer data and therefore on consumer trust. Concerns over safety are legitimate: countries are signing international cloud and data-transmission agreements while many organizations still hesitate to upload their data even to a locally managed cloud. As e-payment becomes ever more indispensable to our daily shopping, establishing legislative frameworks that provide peace of mind on data security should be high on national agendas, along with effectively communicating these frameworks to organizations and end users. The EU’s forward-looking General Data Protection Regulation (GDPR), for instance, has made a good start toward this goal and has even impressed international thinkers.[6] Several e-payment giants (PayPal, Amazon) are deepening their roots in Luxembourg, suggesting that now is the right moment for Luxembourg to take a pioneering role and help set international standards.

3. AI and ethics. As we equip computers with more and more decision-making power, the voices of caution in our heads grow louder: “Can we trust computers to make the ‘right’ decisions?” A recent study revealed that only 35% of IT and business decision-makers have a high level of trust in their own organization’s analytics.[7] Recently, for example, a large wholesaler[8] was forced to stop using its AI-supported recruiting engine because, it turned out, the tool preferred CVs from men over those from women. Questions around AI and ethics are probably the hardest to answer. One hotly debated question is: who takes responsibility when AI decides wrongly? According to the same study, 62% of people think that, in the case of a non-fatal car accident, the blame would lie with the organization that developed the software. This area, more than any other, requires nations to find answers that are acceptable to the whole ecosystem—to AI developers as much as to society at large. Throughout EU history, Luxembourg has often served as a place where consensus was reached even in difficult circumstances involving many competing interests. This time, leadership in the field of AI and ethics could spell a competitive advantage for Luxembourg, and thus for the EU as a whole.


“Innovation is inherently messy, nonlinear, and iterative.”[9]

These three stumbling blocks add to the burdens of innovators who already face the challenges inherent in entrepreneurship and original thinking. Unlike those challenges, however, they cannot be overcome at the company or individual level alone. It therefore falls to the leaders and visionaries of Luxembourg to spearhead an AI playing field acceptable to everybody. As mentioned above, a major step in this direction is the General Data Protection Regulation (GDPR). While the exact form of the regulation and its penalty system are certainly debatable—evident in the thousands of amendments suggested to its initial proposal—this piece of legislation heralds a more conscious form of data use and signals to society that governments recognize, and are acting on, concerns about data privacy and transparency. Right now, a small but growing number of governments, technology companies, and international organizations are developing global AI trust and ethics protocols aimed at regulating interconnected AI-driven systems and products.

At a startup conference earlier this year, Prime Minister Xavier Bettel noted that the pivotal factor in Luxembourg’s success is “to be ready for the next [big] thing.”[10] In a recent New York Times interview, Google CEO Sundar Pichai admitted that “there is nothing inherent that says Silicon Valley will always be the most innovative place in the world.”[11] Taken together, these sentiments may suggest that Luxembourg is ready (and starting) to grab the innovation spotlight. The recent opening of the House of Startups, a hub for innovators, accelerators, experts, and corporates, is yet another sign that the nation’s entrepreneurial spirit is healthy—and hungry. It is now time to develop real answers to the unresolved questions on our future workforce, the security of our data, and an ethical (AI) agenda, answers that would concretely pave the way for sustainable AI adoption.


[1] E.g. KPMG AG Wirtschaftsprüfungsgesellschaft (2018), “Rethinking the value chain. A study on AI, humanoids and robots”

[2] Garg, P. and S. Glick (2018), “AI’s Potential to Diagnose and Treat Mental Illness”

[3] Teradata (2017), as cited in KPMG AG Wirtschaftsprüfungsgesellschaft (2018), “Rethinking the value chain. A study on AI, humanoids and robots”

[4] KPMG Luxembourg (2017), “The robotic revolution – Business transformation through digital labour”

[5] Gartner (2017), “Gartner Says By 2020, Artificial Intelligence Will Create More Jobs Than It Eliminates”

[6] Toby Walsh, a leading AI researcher (UNSW Sydney), and Viviane Reding (European Parliament), “Artificial Intelligence: Truth or Dare” conference, University of Luxembourg, 19 Sept. 2018.

[7] KPMG International (2018), “Guardians of Trust – Who is responsible for trusted analytics in the digital age?”

[8] The company shall remain unnamed for reasons of fairness—after all, this is the downside of being a pioneer.

[9] Quote by G. Day (1997)

[10] Silicon Luxembourg (11 May 2018), “Be Ready for the Next Thing!”

[11] The New York Times (8 Nov. 2018), “Sundar Pichai of Google: ‘Technology Doesn’t Solve Humanity’s Problems’”


Published on 23 April 2019