Name |
Improper AI use or management, leading to risk of inaccurate decisions |
Misinformation or disinformation, impacting corporate reputation or operations |
Description |
- As AI models are extensively adopted in customer service triage, monitoring and analysis, and internal management, CHT is also actively promoting AI applications in its operations. Without a transparent evaluation mechanism, the "black box effect" may be exacerbated, leading to risks such as data misuse and algorithmic bias.
- AI systems rely on large amounts of data and model decisions. Without protection against adversarial attacks, they could face new types of cybersecurity threats like deepfakes and model tampering, leading to service disruptions or inaccurate decisions.
- This risk is multi-faceted, spanning across departments (including technology, legal compliance, and public relations), and cannot be fully managed by traditional risk management, representing an industry-forward, emerging technological risk.
|
- Misinformation and disinformation, amplified by social media algorithms, are prone to creating widespread public opinion turbulence. Such content often blends partial truths with AI-generated imageries, fabricated sources, or decontextualized information, making identification and clarification thereof difficult, potentially posing a complex reputational risk for CHT.
- As a national critical communication platform and digital interface provider, CHT’s role and its responsibility as an information intermediary are widely emphasized by all walks of life and regulators. When misinformation or disinformation involves the company or its services, it may undermine public trust, spark consumer concerns, and negatively impact brand image, business performance, and user retention, leading to operational risks like decreased conversion rates and customer churn.
|
Impact |
- AI adoption in operational processes and AI popularization are promoted. Any implementation postponement due to risk control and governance requirements necessitates readjustment of the budget allocation and strengthening of the transparency assessment, affect the market expansion and profitability.
- Any failure in detection and correction of AI model bias could lead to incorrect decisions and process deviations, trigger infringement and compliance risks, damage brand trust, and force CHT to invest significant resources in solving the problem, increasing operational pressure.
- The concern over the ethical risks of AI from civil societies may prompt strict regulations, which require additional compliance personnel and technology tools, increasing costs and impacting profit.
- As AI systems become complex, the demand for model maintenance, risk monitoring, and technological updates increases, leading to higher costs, and affecting financial stability and investment flexibility.
|
- Direct Operational Losses
(1).Misinformation can lead to wrong decisions, including investment errors, supply chain disruptions, increased customer complaints, and abnormal user behavior changes, and even unfair competition.
(2).To cope with the risk, CHT needs to invest more in public opinion monitoring, crisis response, and reputation management, impacting its OPEX structure and ESG ratings.
(3).The failure to handle the risk could damage CHT’s information credibility and international partner trust.
- Intangible Asset Damage
(1).Fraud or defamation disguised as corporate communication will erode CHT’s reputation and brand value.
(2).Fictitious financial information could cause stock price fluctuations, harming investor confidence and market stability.
- Compliance and Litigation Risks
(1).Failing to clarify misinformation promptly carries the risk of penalties.
(2).Any losses of partners or customers due to false information could lead to legal disputes and compensation liabilities.
|
Mitigating actions |
- Establish CHT’s AI governance framework by referencing the Executive Yuan's Gen-AI guidelines, set internal AI use principles, and obtain ISO 42001 Lead Auditor certificate to strengthen the internal audit and governance mechanisms.
- Organize AI education and interactive experiences, collaborate with industry, government, and academia on large AI advocacy campaigns to boost public understanding and trust in AI application.
- Continuously evaluate the LLM effectiveness in preventing harmful content, quantitatively analyze their reasoning capability and content output risks, and offer developers with a reference for model selection and governance.
- Integrate AI technology applications with international AI management standards to build a "technical protection – architectural resilience – resource allocation" 3-tiered management system and comprehensively enhance AI system risk protection, architectural flexibility, and resource control, implementing a layered governance mechanism.
|
- Implement AI technology to build an automated monitoring framework to identify potential controversial issues and abnormal information dissemination phenomena.
- Integrate multi-source data platforms for real-time social and media opinion capture and semantic analysis to grasp the trend of issue diffusion.
- Utilize machine learning techniques to optimize semantic classification models, enhancing the ability of labeling and grading different types of controversies.
- Integrate natural language processing (NLP) technology to enhance the ability to analyze potential controversy.
- Establish a model refinement mechanism with regular updates of training datasets and algorithm architectures in response to the evolution of misinformation technologies.
- Internal Crisis Response Planning:
(1). Strengthen the coordinated handling capability among strategy, customer service, public relations, and marketing departments.
(2). Establish information-sharing channels with government in line with the "Whole-of-society defense resilience policy."
(3). Monitor high-risk overseas social media platforms to enable early warning and risk prevention.
|