
Create the policy before the technology, say Artificial Intelligence experts

By World Infrastructure Journal

Although Artificial Intelligence (AI) has become commonplace in both the home and the workplace, public uncertainty about the technology persists. In a 2017 study in the International Journal of Social Robotics, 71.6 per cent of respondents expressed some degree of fear or apprehension towards autonomous robots and artificial intelligence. In a more recent survey of the public sector by UKCloud, 45 per cent of respondents said they felt uncomfortable using cloud technologies “to safely and easily share data with partners and other agencies.” PPP therefore met with Richard Zak of Microsoft and Jack Williams of the large technology group Hexagon to discuss the pressing need to clarify exactly what AI is, and how it will impact its users.


Hugely important, but mistrusted

While AI plays an ever-increasing role in our daily lives, concerns over the technology still linger. Some are rather far-fetched, such as fears that AI may turn against humanity (à la Terminator), but others are rooted in an entirely understandable confusion over what AI is, how it works, and what it will mean for the people who use it. If unaddressed, this trepidation will not only slow the adoption of newer AI technologies but will also hamper their utility. Users who either do not understand or do not trust their software will not use it to its fullest potential, diminishing returns both for their employer and for the AI itself (the less data produced, the less efficient the software becomes). According to UKCloud, over 60 per cent of public sector employees still keep “some of their data stored in paper-based archives.”

This is rather disappointing, as the potential for AI is immense. As Williams pointed out: “the potential for A.I. in the next 10 years is to really help augment the users, citizens, and people at the city government or in public safety. It's already ubiquitous to a degree – the cell phone in your pocket, with all its technologies, has some level of A.I., machine learning, or algorithms embedded in it – but I think where you're really going to see it help us do our jobs better is when it can provide us with more information, filter out the noise, and augment us (meaning it gives us greater capabilities).”

Giving a concrete example, Zak pointed out that emergency calls “used to be a phone call, with a dispatcher listening over the phone, trying to understand what's going on. Well, now that call is bringing lots of data from multiple different sources.” That data, Zak claimed, “can help the dispatcher create a picture of what's going on, but only if they can decipher it. So, what’s important is having a system that learns from past incidents to prioritise things for that dispatcher.” After all, “if the system itself has processed all the data from so many incidents in the past, it's able to tee up resources and plan next steps for that dispatcher.”

As such, there is a pressing need for all parties involved in the implementation and rollout of the technology to clarify what AI is, and to make it more accessible – lest the transformative potential of AI be squandered. Efforts are already being made on this front from a technical standpoint. UKCloud, in particular, is working to standardise tooling and ease the user experience, and similar efforts are being undertaken by the majority of cloud providers featured on the GOV.UK Digital Marketplace. However, while this is promising news for the user experience, these efforts will not solve the ultimate issue facing AI – a lack of trust.


A need for clarity

Solving that issue will require both the AI technology itself and the policies governing it to be transparent and clear. And it is possible. “It's a good business model being sustainable, being ethical, being transparent,” Williams asserted, suggesting that third parties should validate data sets to ensure they are not biased. He explained: “You can make money, be profitable, provide good technology, and also protect privacy and individual rights.”

As Zak pointed out, “what effective organisations do is create the policy before they're even creating the technology.” Continuing, he noted that “if you've got a deep understanding of the problem that you're trying to solve, what is missing in the technology parts of the value chain,” you can then “craft the policy so that you're delivering A.I. capabilities to address that problem.” In other words, “you build the specific scenarios. You create the policies to guide your development so that what you're going to deliver is actually supporting people.”

While many of these policies would have to change depending on the system and project, Zak identified six core principles that any project ought to abide by:

  1. AI has to operate reliably, even when circumstances change
  2. It has to treat all people the same, so biases must be explicitly built out of AI models
  3. Information must be protected from attacks, so AI systems must be built to make use of data while keeping that data private and secure
  4. AI should connect people, and ought to be built to intentionally include rather than unintentionally exclude
  5. It needs to be explainable and understandable to all, especially to create transparency around any potential issues of fairness, privacy, bias, or unintended outcomes
  6. All projects need to understand that AI can take neither the place nor the accountability of the person

On the last point of accountability, Zak underscored that “you have to have someone in the organisation who is ultimately responsible for A.I. broadly… a compliance officer or an ethics officer or somebody who is the ultimate authority and responsible for oversight.” Fortunately, many organisations are starting to do this, creating committees and “working groups that help them monitor these A.I. systems.” Continuing in this way will be crucial if AI is to win over its doubters and prove its potential. As such, alongside ensuring greater transparency, it is essential for both the user and the system itself that when it comes to policy, organisations and governments are “intentional about it. Rather than saying, well, we bought a new gadget, plug it in the wall and we're done, you have to have some group of people ultimately responsible for how it's implemented.” After all, AI relies as much on the user as it does on the system itself – meaning policy solutions need to be built for both.


#WIJNews #EliotGillings #Microsoft #Hexagon #AI