Appendix

IA research summary

This appendix summarises my IA thesis research at MIT. A primary focus of the research was to elicit the risks that IA poses to firms. My motivation is to refine our understanding of this under-studied and under-appreciated topic, which I believe is critically important to the global adoption and success of IA. I also hope that understanding the risks of IA will force us to consider how to address them, and so reduce the negative effects that automation can have on society.

My study revealed thirty-six IA risks, which fall into two main groups: socio-organisational and operational. Socio-organisational risks stem from the relationships between different social constructs, for example between firms and the legal system, or between management and employees. Operational risks arise during ongoing IA operations. They are narrower in scope than socio-organisational risks, and are project-based or technical in nature.

I also looked at what mitigation measures exist to combat the IA risks. These risk mitigation techniques have guided the development of this book and have been implemented in the reusable IA templates where possible.

Socio-organisational IA risks

Four organisational constructs were identified: the environment, the enterprise, third parties, and employees. Conflicts between these groups give rise to 22 IA risks.


Figure Appendix.1 – The four organisational groups and how risks emerge between them

A summary of the 22 risks is in the table below.

Socio-Organisational IA Risks (22)
Environmental Risks (3)
Compliance – IA may complicate compliance with existing regulations, and third-party ML vendors must also comply with the applicable regulations. For example, the GDPR provides Europeans with the right to “meaningful information about the logic involved in automated decisions” and the right not to “be subject to a decision based solely on automated processing”.
Ethics – Algorithms and data may contain bias, leading to unfair outcomes. Models must not use certain sensitive data, such as ethnic origin, political affiliation, religious beliefs or sexual orientation. Care must also be taken with features correlated to sensitive data, such as geographic region or ethnic group. Another aspect of ethics is the firm’s responsibility to properly educate and prepare employees whose jobs are changed or displaced by IA.
Regulatory – Future government regulations may affect how and when algorithmic decisions can be used; in the extreme case, new regulations may require disabling algorithmic decision-making entirely. To reduce the impact of regulatory risk, IA teams should plan for explainable AI and provide ways to disable algorithmic decision-making before new regulation is passed.
 
Enterprise Risks (5)
Departmental Resistance – Managers may worry that their headcounts or budgets will be frozen or reduced because of IA, leading them to withhold cooperation or sabotage IA efforts.
Employee Turnover – Employees may quit in large numbers because of IA, for reasons ranging from reduced hours to added stress and ideological opposition. One way to counteract this is to provide organisational support, for example employee training or team-building exercises.
Financial Loss – Incorrect IA work may cause financial loss to the firm, whether through litigation or through erroneous actions triggered by the automated process.
Information Asymmetry – The power balance between business units may shift as those in control of IA gain far more processing power and access to information. For example, the data scientists who control valuable decision-making may gain leverage over the business unit they service.
Loss of Control – If the firm relies on vendors to develop or host ML predictions, there is a risk that the vendor’s service will go offline or cease operating. For example, in December 2021 Amazon AWS had three large-scale unplanned outages, halting the services of many companies for several hours during the workday.
 
Employee Risks (8)
Cognitive Work Overload – Simple cognitive tasks are automated far more often than difficult ones. This may leave only difficult cognitive work for humans, leading to cognitive overload and increased job stress.
Loss of Job Meaning – IA may automate work that is meaningful to employees, leaving them less satisfied with their jobs.
Loss of Job Security – Knowledge-based automation can leave a much larger group of employees worrying about their jobs, increasing their stress levels and impairing their health.
Mistrust in Management – A push towards IA may create mistrust between workers and management, especially if IA is simply mandated from the top rather than carefully considered.
Mistrust in Model Predictions – A lack of trust in model predictions may lead employees to actively resist or sabotage IA efforts, or to continue working processes manually even after automation is in place.
Prediction Accountability – Firms must decide which employee(s) will be held responsible if a prediction or business outcome is incorrect. Disagreement with this decision can breed mistrust in the organisation.
Reduced Work Preparedness – With IA in place, employees spend less time looking at work cases, so fewer details are known about a case when it needs to be worked manually.
Worker Deskilling – Decision-making is a cognitive skill that degrades when unused. Workers’ skill at their knowledge tasks may erode under automation, which can lead to a permanent loss of organisational decision-making skill.
 
Third-Party Risks (6)
Assignment of Liability – When multiple companies are involved in developing and operating the ML predictions in IA, liability between the firms becomes unclear if something goes wrong.
Attract Competitive Response – Publicly investing in and promoting IA technologies can trigger responses from competitors, either encouraging them to pursue IA themselves or to decry the firm’s use of IA.
Conflicts of Interest – AI vendors may be incentivised to “hold the algorithm hostage” and practise rent-seeking behaviour, or to sell the algorithm to competitors.
Missed Servicing Opportunities – When AI is used to interface with customers, it can only capture topics within the rigid boundaries it has been trained on; it has no flexibility to discover additional ways to service the customer.
Performance Agreement Breaches – Existing performance or service level agreements may need to be renegotiated after IA is implemented. Most SLAs have measurable numerical targets, for instance minimum response time, minimum completion time or maximum downtime, and can incur financial penalties when targets are missed. IA will likely increase the throughput of completed work cases, lowering the risk of missing some types of SLA on average. However, it also introduces additional points of failure (the infrastructure that manages and deploys ML) and new sources of potential downtime. For example, if ML prediction is cloud-hosted and the cloud platform goes offline, performance could suffer until the outage is discovered and fixed.
Reputation Loss – Incorrect IA work may cause reputational loss to the firm.

Table Appendix.1 – 22 socio-organisational IA risks

Operational IA risks

The 14 operational risks were separated into four categories: project, process, ML model and data.


Figure Appendix.2 – The four operational risk categories

A summary of the operational risks is shown in the table below.

Operational IA Risks (14)
Project Risks (3)
Low Predictive Performance – IA predictions may be less accurate or reliable than human predictions, resulting in worse outcomes after automation.
Low ROI – IA may be seen to offer a lower return on investment than RPA, due to the added costs of constant monitoring, retraining of models, and employing data scientists.
Unmeasurable ROI – The ROI of IA may not be measurable, as the value of knowledge or decision work can be difficult to quantify.
 
Process Risks (4)
Control Flow Drifts – Changes to business process logic and pathing in the control flow may necessitate rebuilding ML models.
Difficult Error Detection – Automated ML decisions may make it more complicated to detect errors in the business process.
Reduced Understanding of Business Logic – Knowledge of particular business processes may fade over time if business decision-making is automated.
Time Lag Effects – If an incorrect prediction is made, the automated process may continue to perform processing steps. The delay between when a misprediction is made and when it is discovered, together with the incorrect work done as a result, is known as a “time lag effect”.
 
ML Model Risks (3)
Adversarial Attacks – The ML algorithms used in IA are subject to adversarial attacks, which can result in unwanted automated work being processed.
Performance Degradation – A model’s predictive performance is known to decline over time unless the model is actively managed. Models and technologies can also be rendered obsolete by newer, more predictive ML algorithms or by the deprecation of the libraries used to implement them.
Transfer Learning Bias – “Transfer learning” refers to using an existing ML model as the base for an application-specific model. It represents a breakthrough in ML, allowing models to be built more quickly and less expensively, especially for image and text processing. However, if transfer learning is used to develop a model, the base model may carry hidden biases with no way to fix them.
 
Data Risks (4)
Data Bias – Models may perform poorly on real-life data because of biases in the data. Types of data bias include historical bias, which reinforces stereotypes of particular groups; representation bias, where certain populations are under-represented in the collected data; measurement bias, where proxy data is collected (for example, using the number of police arrests as a proxy for crime rate); and aggregation bias, where a catch-all ML model is built when multiple models should have been used instead.
Data Drift – The underlying nature or distribution of the data used as prediction input may change over time, so that it is no longer representative of the original training data.
Data Privacy & Security – Sending sensitive data to third parties for use or model development may lead to data leaks.
Data Quality – Low data quality may lead to poorly performing models. “Quality” is an ill-defined concept, but it typically covers the number of data samples, the proper data structure, a lack of noise or errors, data completeness, and highly relevant features.

Table Appendix.2 – 14 operational IA risks

IA Risk mitigation measures

Fifteen risk mitigation techniques were uncovered during my research. They’re organised into four categories based on when they can be applied during an IA project. The techniques under Planning and Due Diligence, Algorithm Selection, and Human Interaction Design can be used during the planning and design phases before an IA solution is deployed for use. The techniques listed under the Operations category are used post-implementation, for as long as the IA solution is in production.


Figure Appendix.3 – 15 risk mitigation measures

A summary of the risk mitigation measures, and where they have been discussed or implemented in this book, is shown in the table below.

Risk Mitigation Measures (15)
Planning and Due Diligence (3)
AI Liability Terms in Contracts – Ensure that liability for incorrect work or predictions in IA is codified into formal contracts. (Reference: N/A)
Contract Renegotiation – Renegotiate contracts with other firms when the use of IA fundamentally changes the premise(s) on which they were based. (Reference: N/A)
Understand Employee Sentiment – Understand employee sentiment regarding IA, and plan IA projects with these sentiment segmentations in mind. (Reference: Chapter 10 – IA’s Impact on the Robotic Operating Model)
 
Algorithm Selection (2)
Explainable AI – Choose algorithms that produce inherently interpretable models, or use methods that can explain predictions after they have been made. (Reference: Chapter 10 – IA’s Impact on the Robotic Operating Model)
Minimise False Positives – Design or modify the ML algorithm to explicitly minimise false positives rather than optimising another measure of accuracy. (Reference: N/A)
 
Human Interaction Design (3)
Human in the Loop – Have a human monitor, review or audit the work performed by the ML algorithms. This may involve redesigning the process so that some cases are routed to humans for manual processing. (Reference: Chapter 4 – Reviewing Predictions and Human in the Loop)
Random Sampling – Choose a fixed percentage of work cases that will always be sent to a human for processing instead of being processed automatically. (Reference: Chapter 4 – Reviewing Predictions and Human in the Loop)
Thresholding – Define a confidence threshold for the ML algorithm: any prediction whose confidence score is above the threshold is processed automatically, while anything lower is sent to a human for processing. (Reference: Chapter 4 – Reviewing Predictions and Human in the Loop)
 
Operations (7)
Avoid Self-Learning – Avoid techniques that involve self-learning, preferring to approve any changes or improvements to the underlying models before use in production. (Reference: N/A)
Governance – Put governance and documentation in place to manage the machine learning lifecycle and to prevent the loss of process knowledge. (Reference: Chapter 10 – IA’s Impact on the Robotic Operating Model)
Monitor Data – Monitor and update the training data regularly, and rebuild the related ML models. (Reference: Chapter 10 – IA’s Impact on the Robotic Operating Model)
Monitor Models – Monitor and update the ML models on a regular basis. (Reference: Chapter 10 – IA’s Impact on the Robotic Operating Model)
Process Runtime Controls – Provide controls that allow automated processes to switch between human and ML prediction, and between validation and no validation, during process execution. (Reference: Chapter 6 – Reusable IA Components)
Self-Learning – Improve existing ML predictions automatically through self-learning and deploy them automatically for use in production. Note the contradiction with “Avoid Self-Learning” above; experts differ in their recommendations for human control over model improvement. (Reference: N/A)
Staged Deployments – Use deployment techniques such as canary testing and A/B testing that allow for fast deployment of model changes, and rollbacks if a problem is encountered. (Reference: Chapter 9 – ML Deployments and Database Operations)

Table Appendix.3 – 15 risk mitigation measures and where they are addressed in this book
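The Random Sampling and Thresholding measures in the table above can be combined into a single routing rule for each work case. The sketch below is a minimal illustration of that idea; the 5% sampling rate and 0.9 confidence threshold are assumptions for the example, not recommendations from the research.

```python
# Routing sketch combining two measures from the table above: random
# sampling (a fixed share of cases always goes to a human) and
# thresholding (low-confidence predictions go to a human). The rates
# used here are illustrative assumptions only.
import random

def route_case(confidence, threshold=0.9, sample_rate=0.05, rng=random.random):
    """Return 'human' or 'auto' for a work case given the model's confidence."""
    if rng() < sample_rate:     # random sampling: always audit some cases
        return "human"
    if confidence < threshold:  # thresholding: not confident enough to automate
        return "human"
    return "auto"               # confident prediction: process automatically

# Deterministic rng stubs make the routing logic easy to verify:
print(route_case(0.95, rng=lambda: 1.0))  # auto  (confident, not sampled)
print(route_case(0.42, rng=lambda: 1.0))  # human (below the threshold)
print(route_case(0.95, rng=lambda: 0.0))  # human (picked by random sampling)
```

Keeping both rules in one function also supports the Process Runtime Controls measure, since the threshold and sampling rate can be changed at runtime without redeploying the process.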

