Logically AI: Testing and monitoring AI models used to counter online misinformation
Case study from Logically AI.
Background & Description
This case study focuses on the use of AI to detect online misinformation at scale.
In this case study, we outline our approaches and principles for building trustworthy AI systems for detecting online misinformation. Logically uses a human-in-the-loop AI framework called HAMLET (Humans and Machines in the Loop Evaluation and Training) to enable the development of trustworthy and responsible AI technologies.
This framework enables machines and experts to work together to design AI systems with greater trustworthiness, including robustness, generalisability, explainability, transparency, fairness, privacy preservation, and accountability. Our approach to trustworthy AI considers the entire lifecycle of AI systems, ranging from data curation to model development, to system development and deployment, and finally to continuous monitoring and governance. HAMLET addresses various data-level and model-level challenges in order to develop effective AI solutions for the problems of the online information environment. The framework enables the collection of expert data annotations, expert feedback, AI system performance monitoring and lifecycle management.
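To make the expert-in-the-loop element concrete, the sketch below shows one way low-confidence model outputs could be routed to expert reviewers so that their annotations can feed back into training and evaluation. It is a minimal illustration, not a description of HAMLET's internals: the class names, the `model.predict` interface and the 0.8 confidence threshold are assumptions.

```python
# Hypothetical sketch of a human-in-the-loop review step: low-confidence model
# outputs are routed to expert reviewers, and the resulting expert annotations
# are retained so they can feed back into training and evaluation.
# The class names, prediction interface and threshold are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class ReviewQueue:
    """Collects items that require expert annotation."""
    items: list = field(default_factory=list)

    def submit(self, text: str, model_label: str, confidence: float) -> None:
        self.items.append(
            {"text": text, "model_label": model_label, "confidence": confidence}
        )


def triage(text: str, model, queue: ReviewQueue, threshold: float = 0.8):
    """Accept confident predictions; route uncertain ones to expert review."""
    label, confidence = model.predict(text)  # assumed to return (label, score)
    if confidence < threshold:
        queue.submit(text, label, confidence)  # expert label later joins the training set
        return None  # no automated decision is made for uncertain items
    return label
```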
Data quality management is essential for addressing data outliers, anomalies and inconsistencies effectively. Handling data-level bias is critical to eliminating noisy patterns and preventing models from serving misleading insights.
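The sketch below illustrates the kind of basic data-quality checks this implies, flagging missing values, duplicates, label imbalance and outlying document lengths. The column names and the |z| > 3 outlier rule are assumptions made for the example.

```python
# Illustrative data-quality checks of the kind described above. The column
# names and the |z| > 3 outlier rule are assumptions made for this example.
import pandas as pd


def basic_quality_report(df: pd.DataFrame, text_col: str = "text",
                         label_col: str = "label") -> dict:
    """Flag common issues: missing values, duplicates, label imbalance and
    outlying document lengths."""
    lengths = df[text_col].fillna("").str.len()
    length_z = (lengths - lengths.mean()) / lengths.std(ddof=0)
    return {
        "missing_text": int(df[text_col].isna().sum()),
        "duplicate_texts": int(df.duplicated(subset=[text_col]).sum()),
        "label_distribution": df[label_col].value_counts(normalize=True).to_dict(),
        "length_outliers": int((length_z.abs() > 3).sum()),
    }
```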
AI model performance monitoring and lifecycle management are also critical in order to deal with the dynamic nature of the online information environment. Unlike traditional code, AI models are unique software entities whose performance can fluctuate over time due to changes in the data fed into the model after deployment. Once a model has been deployed, it needs to be monitored to ensure that it performs as expected. Therefore, tools that can test and monitor models to ensure their best performance are required to mitigate regulatory, reputational and operational risks. The main concepts that should be monitored are the following:
- Performance: Evaluating a model’s performance against a set of metrics and logging its decisions or outcomes can provide directional insights and comparisons with historical data. These can be used to compare how well different models perform and therefore which one is best.
- Data Issues and Threats: Modern AI models are increasingly driven by complex feature pipelines and automated workflows involving dynamic data that undergoes various transformations. With so many moving parts, it is not unusual for data inconsistencies and errors to reduce model performance over time and go unnoticed. AI models are also susceptible to attacks, including the ingestion of misleading training data designed to create blind spots or vulnerabilities.
- Explainability: The black-box nature of AI models makes them difficult to understand and debug, especially in a production environment. Therefore, being able to explain a model’s decision is vital not only for its improvement but also for accountability reasons.
- Bias: Since AI models capture relationships from training data, it is likely that they propagate or amplify existing data bias, or may even introduce new bias. Being able to detect and mitigate bias during the development process is difficult but necessary.
- Drift: The statistical properties of the target variable, which the model is trying to predict, can change over time in unforeseen ways. This causes problems because predictions become less accurate as time passes, a phenomenon known as “concept drift”. A minimal sketch of how such drift can be detected is shown after this list.
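As a concrete illustration of the drift monitoring described above, the following sketch compares the distribution of recent model scores against a reference window using a two-sample Kolmogorov-Smirnov test. The significance threshold, window sizes and synthetic data are illustrative assumptions rather than a description of any production configuration.

```python
# Minimal sketch of drift detection: compare the distribution of recent model
# scores against a reference window using a two-sample Kolmogorov-Smirnov test.
# The threshold, window sizes and synthetic data are illustrative assumptions.
import numpy as np
from scipy.stats import ks_2samp


def score_drift(reference_scores: np.ndarray, live_scores: np.ndarray,
                p_threshold: float = 0.01) -> bool:
    """Return True if the live score distribution differs significantly
    from the reference distribution."""
    result = ks_2samp(reference_scores, live_scores)
    return result.pvalue < p_threshold


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    reference = rng.beta(2, 5, size=5_000)  # scores observed at validation time
    live = rng.beta(2, 3, size=5_000)       # scores after the topic mix shifts
    print("Drift detected:", score_drift(reference, live))
```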
To monitor these risks, HAMLET leverages automation and industry best practices around machine learning operations (MLOps) to design and implement workflows that automatically detect model performance degradation.
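As an example, a scheduled monitoring job of this kind might compare the latest evaluation metric against a stored baseline and flag the model for review or retraining when it degrades beyond a tolerance. The metric (F1), the tolerance and the logging-based alert below are assumptions made for illustration.

```python
# Hypothetical scheduled monitoring check: compare the latest evaluation metric
# against a stored baseline and flag the model when it degrades beyond a
# tolerance. The metric (F1), tolerance and logging-based alert are assumptions.
import logging

logger = logging.getLogger("model_monitoring")


def check_degradation(baseline_f1: float, current_f1: float,
                      tolerance: float = 0.05) -> bool:
    """Return True (and log a warning) if F1 has dropped beyond the tolerance."""
    drop = baseline_f1 - current_f1
    if drop > tolerance:
        logger.warning(
            "F1 degraded by %.3f (baseline %.3f, current %.3f); "
            "flagging model for review or retraining.",
            drop, baseline_f1, current_f1,
        )
        return True
    return False
```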
Establishing trustworthiness is a dynamic process. Continuously enhancing AI trustworthiness requires a combination of manual and automation-based workflows guided by conceptual frameworks and principles. MLOps provides a starting point for building the workflow for trustworthy AI. By integrating the ML lifecycle, MLOps connects research, experimentation, and product development to enable the rapid leveraging of theoretical developments in trustworthy AI. It has the following properties, which are incorporated within our HAMLET framework:
- Close collaboration between interdisciplinary roles: Building trustworthy AI requires organising different roles, such as ML researchers, software engineers, safety engineers, and legal experts. Close collaboration narrows the knowledge gaps between these forms of expertise.
- Aligned principles of trustworthiness: The risk of untrustworthiness exists at every stage of the lifecycle of an AI system. Mitigating such risks requires that all stakeholders in the AI industry be aware of and aligned with unified principles of trustworthiness.
- Extensive management of artefacts: An industrial AI system is built upon various artefacts such as data, code, models, configuration, product design, and operation manuals. Careful management of these artefacts helps assess risk and increases reproducibility and auditability; a lightweight sketch of such artefact logging is shown after this list.
- Continuous feedback loops: Classical continuous integration and continuous delivery (CI/CD) workflows provide effective mechanisms to improve software through feedback loops. In a trustworthy AI system, these feedback loops should connect and iteratively improve the five stages of its lifecycle, i.e., data, algorithm, development, deployment, and management.
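To illustrate the artefact management point above, the sketch below logs a fingerprint of the training data alongside the model version, configuration and evaluation metrics, so that each run can be reproduced and audited. The field names and the JSON-lines registry format are assumptions, not a description of any particular tooling.

```python
# Illustrative artefact record of the kind described above: each training run
# logs the dataset fingerprint, model version, configuration and evaluation
# metrics to a simple append-only registry, supporting reproducibility and
# auditability. Field names and the JSON-lines format are assumptions.
import hashlib
import json
import time
from pathlib import Path


def dataset_fingerprint(path: str) -> str:
    """Content hash of a dataset file, so the exact training data is traceable."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()


def log_run(registry_path: str, model_version: str, data_path: str,
            config: dict, metrics: dict) -> None:
    """Append one training-run record to a JSON-lines registry file."""
    record = {
        "timestamp": time.time(),
        "model_version": model_version,
        "data_sha256": dataset_fingerprint(data_path),
        "config": config,
        "metrics": metrics,
    }
    with open(registry_path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(record) + "\n")
```

Records of this kind also give the feedback loops described above a concrete audit trail to act on.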
How this technique applies to the AI White Paper Regulatory Principles
More information on the AI White Paper Regulatory Principles.
Safety, Security & Robustness
Our approach is relevant to the principles of safety, security and robustness as it enables the development of AI technologies adopting best practices in data security management and data-level risk and threat management. Moreover, our approach promotes the adoption of industry standards for responsible and trustworthy AI. This not only enables the safe and responsible development of AI technologies, but also increases their robustness against adversarial attacks.
Appropriate Transparency & Explainability
Our approach is relevant to the principles of transparency and explainability as it enables us to develop AI models and systems that are compliant with industry standards for fairness, accountability, trustworthiness and explainability. This ensures greater transparency, as well as flexibility for usage and collaborative application development.
Fairness
Our approach is relevant to the principle of fairness as it enables us to develop a robust and mature AI technology stack for commercial products and services that counter mis/disinformation at scale while meeting user expectations of satisfaction and trust. We expressly recognise the risk of bias, which informs our processes for collecting data sets and involving interdisciplinary teams, and means our approach actively seeks to prevent the production of discriminatory outcomes.
Why we took this approach
Although AI technologies have been proven capable of detecting misinformation at scale, to be truly effective they must be deployed in adherence to trustworthy AI principles in real-world applications. However, many current AI systems have been found to be vulnerable to bias, user privacy risks and imperceptible attacks. These drawbacks degrade user experience and erode people’s trust in AI systems.
HAMLET enables machines and experts to work together to design AI systems with greater trustworthiness, including robustness, generalisability, explainability, transparency, fairness, privacy preservation, and accountability. This approach to trustworthy AI considers the entire lifecycle of AI systems, ranging from data curation to model development, to system development and deployment, and finally to continuous monitoring and governance.
Benefits to the organisation
- Enables us to develop AI models and systems that are compliant with industry standards for fairness, accountability, trustworthiness and explainability, allowing greater transparency as well as flexibility for usage and collaborative application development.
- Provides the company with a robust and mature AI technology stack to develop commercial products and services that counter mis/disinformation at scale while meeting user expectations of satisfaction and trust.
Limitations of the approach
- Our understanding of AI trustworthiness is far from complete or universal, and will inevitably evolve as we develop new AI technologies and understand their societal impact more clearly.
- Increased transparency improves trust in AI systems through information disclosure. However, disclosing inappropriate information might increase potential risks. For example, excessive transparency on datasets and algorithms might leak private data and commercial intellectual property. Disclosure of detailed algorithmic mechanisms can also lead to the risk of targeted hacking.
- An inappropriate explanation might also cause users to overly rely on the system and follow the wrong decisions of AI. Therefore, the extent of transparency of an AI system should be specified carefully and differently for the roles of public users, operators, and auditors.
- From an algorithmic perspective, the effects of different objectives of trustworthiness on model performance remain insufficiently understood. Adversarial robustness increases a model’s generalisability and reduces overfitting, but tends to negatively impact its overall accuracy. A similar loss of accuracy occurs in explainable models. Besides this trust-accuracy trade-off, algorithmic friction exists between the dimensions of trustworthiness.
- Despite increasing research interest and efforts, the quantification of many aspects of AI trustworthiness remains elusive. Explainability, transparency, and accountability of AI systems are still seldom evaluated quantitatively, which makes it difficult to compare systems accurately.
Further Links (including relevant standards)
- B. Buruk, P.E. Ekmekci, and B. Arda, “A critical perspective on guidelines for responsible and trustworthy artificial intelligence,” in Medicine, Health Care and Philosophy
- K. A. Crockett, L. Gerber, A. Latham and E. Colyer, “Building Trustworthy AI Solutions: A Case for Practical Solutions for Small Businesses,” in IEEE Transactions on Artificial Intelligence
- D. Kaur, S. Uslu, K.J. Rittichier, A. Durresi, “Trustworthy Artificial Intelligence: A Review,” in ACM Computing Surveys
Further AI Assurance Information
- For more information about other techniques visit the OECD Catalogue of Tools and Metrics: https://oecd.ai/en/catalogue/overview
- For more information on relevant standards visit the AI Standards Hub: https://aistandardshub.org/