Trust in AI

The widespread adoption of AI raises a fundamental question: Are people willing to entrust machines with tasks that were traditionally performed by qualified humans? Trust in AI plays a crucial role, as higher trust levels can accelerate adoption, while distrust may slow it down. Several risks and concerns influence public perception and regulatory approaches to AI, shaping how and where it is integrated into society.

Disinformation and fake news:

A major concern in the era of AI is the spread of deepfakes and AI-generated fake content online. The reduction of content moderation teams at major platform companies, combined with the increasing sophistication of AI-generated misinformation, has prompted regulatory efforts around the globe. The EU’s AI Act includes transparency obligations, which include informing users when they are engaging with AI and ensuring that AI-generated or manipulated outputs are clearly marked and easily detectable as artificial [3]. At the same time, global leaders are currently pushing towards deregulation of data privacy and AI in favour of increased profitability and global competition.

In 2024, almost half of the world’s population elected their leaders, and concerns about the impact of social media and AI-driven influence campaigns were significant. One emerging tactic was the use of “softfakes”: subtly altered images, videos, or audio clips designed to make a political candidate appear more appealing. Unlike deepfakes, these modifications are often created by the candidates’ own campaign teams, further blurring the line between strategic marketing and misinformation [19]. While fears were strong at the start of 2024, the actual extent of AI-fuelled interference turned out to be relatively limited. The dangers of fake news remain a significant issue, but societal resilience and the role of responsible media outlets may be stronger than initially feared. In reality, much of the misinformation circulating today is still generated by humans and spread through direct communication with the media, as seen in both the U.S. and Russia.

Safety and security of AI systems:

AI systems, like any other computer technology, can be exploited for malicious purposes. The rise of AI has introduced new threats while also amplifying existing ones. AI can lower the cost of cyberattacks by automating tasks that would typically require human effort, intelligence, and expertise, making large-scale attacks more feasible [13]. Additionally, new types of attacks that exploit human vulnerabilities, such as speech synthesis for impersonation, are expected to increase. These attacks often target more vulnerable individuals, particularly the elderly.

The evolving threat landscape posed by the malicious use of artificial intelligence technologies requires a response that combines multiple strategies. Policymakers should collaborate with technical researchers to investigate, prevent, and mitigate potential malicious uses of AI. At the same time, researchers must recognize the dual-use nature of their work by taking potential misuse into account when setting research priorities and guidelines. When there is a risk of harmful applications, they should actively collaborate with relevant experts to help prevent misuse [13].

Beyond security concerns, ensuring the reliability and safety of AI-generated output is equally important. Since AI models generate responses based on their training data, the quality and accuracy of their output depend heavily on that data. AI-generated insights often inform business, user, and technical decisions, making flawed or biased outputs a significant risk. Errors in AI-generated output can lead to financial losses, safety hazards, legal liability, and discrimination [4].

Lack of transparency in AI companies:

Another challenge is the black box problem, where AI models typically lack transparency about the sources and content of training data and the parameters and features of the model [5]. While openness in AI development is essential for trust and accountability, many companies restrict transparency to protect intellectual property and maintain a competitive edge [1].

Additionally, companies depend on publicly available, scrapeable data for training AI models. Analysis of Google’s widely used C4 training dataset revealed that its content was sourced from 15 million different web domains [6]. Many companies deliberately avoid documenting the specifics of their training data, fearing it may include personal information, copyrighted material, or other data collected without consent—potentially leading to legal challenges, media backlash and reputational damage [5]. This lack of clarity can make it difficult for users to understand how AI systems reach their conclusions, leading to distrust and scepticism.
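To make the documentation gap more concrete, the sketch below shows one way a team could keep a minimal, machine-readable record of training-data provenance. It is an illustrative structure only, not an established standard or any company’s actual format; the class and field names (`DataSourceRecord`, `contains_personal_data`, and so on) are assumptions made for the example.

```python
# Minimal sketch of a machine-readable training-data provenance record.
# The schema is hypothetical and only illustrates the kind of documentation
# discussed above; it is not an established standard.
from dataclasses import dataclass, field, asdict
import json


@dataclass
class DataSourceRecord:
    domain: str                   # e.g. "example.org"
    licence: str                  # licence or terms under which the data was collected
    collected_on: str             # ISO date of the crawl
    contains_personal_data: bool  # flagged during manual or automated review
    notes: str = ""               # copyright or consent concerns, removal requests, etc.


@dataclass
class DatasetDatasheet:
    name: str
    sources: list[DataSourceRecord] = field(default_factory=list)

    def to_json(self) -> str:
        # Serialise the datasheet so it can be published alongside the model.
        return json.dumps(asdict(self), indent=2)


sheet = DatasetDatasheet(name="example-web-corpus")
sheet.sources.append(DataSourceRecord(
    domain="example.org",
    licence="CC-BY-4.0",
    collected_on="2023-04-01",
    contains_personal_data=False,
))
print(sheet.to_json())
```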



Bias and fairness:

Training AI algorithms on datasets that are free of bias and gaps remains a challenging task, as the data reflects the limitations of the people and institutions involved in its creation. Bias can emerge at various stages, from data selection to model training and deployment, and can be unintentionally reinforced by the employees building the systems, amplifying existing biases in the models.

A major concern with biased AI models is that when humans repeatedly interact with these systems, they tend to become more biased themselves, potentially forming a feedback loop. This amplification effect is more pronounced in human-AI interactions than in human-human interactions. Unlike AI, which leverages biases to improve its predictions, humans are generally less sensitive to minor biases and do not magnify them to the same extent [2].

Efforts to mitigate bias include diverse data sourcing, bias detection tools, and fairness constraints, but challenges remain, especially as training is often performed in closed environments and testing for bias is difficult [1].
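As a concrete example of what a bias detection tool can measure, the sketch below computes a demographic parity difference, the gap in positive-prediction rates between two groups, on synthetic predictions. The simulated data, the group split, and the 0.1 warning threshold are all assumptions made for illustration, not values from the cited studies.

```python
# Minimal sketch of one common bias check: the demographic parity difference,
# i.e. the gap in positive-prediction rates between two groups.
# The synthetic predictions and the 0.1 threshold are illustrative assumptions.
import numpy as np


def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Difference in positive-outcome rates between group 1 and group 0."""
    return float(y_pred[group == 1].mean() - y_pred[group == 0].mean())


rng = np.random.default_rng(seed=0)
group = rng.integers(0, 2, size=1_000)          # protected attribute (0 or 1)
y_pred = rng.binomial(1, 0.55 + 0.15 * group)   # model approves group 1 more often

gap = demographic_parity_difference(y_pred, group)
print(f"Demographic parity difference: {gap:.3f}")
if abs(gap) > 0.1:
    print("Warning: predicted approval rates differ notably between groups.")
```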

Anthropomorphisation:

The desire for social connection has long driven technological innovation, dating back to the early days of the internet. With advances in AI, this has led to increased anthropomorphising and the rise of generative AI companions. Anthropomorphising in AI refers to the tendency to attribute human-like qualities, emotions, or intentions to artificial systems, despite their lack of consciousness or true understanding. This occurs when AI chatbots, virtual assistants, or humanoid robots use natural language, mimic human expressions, or respond in ways that feel personal and lifelike.

Research suggests that anthropomorphising improves user interaction by increasing trust in AI systems. It also improves accessibility and broadens AI adoption, as these systems can adapt their communication style based on the individual they interact with.

However, anthropomorphic design features also introduce risks. Users may develop emotional attachments to human-like AI, leading to over-reliance and potential infringements on privacy and autonomy. This emotional bond can be exploited to influence behaviour, including encouraging purchases. Furthermore, AI companionship raises ethical and psychological concerns. Over-dependence on artificial relationships, such as AI therapists or AI companions, may contribute to digital loneliness, where the absence of genuine human interaction deepens feelings of isolation. This, in turn, can contribute to a sense of alienation and dehumanization, as AI fails to fulfil the fundamental human need for authentic connection [17].

AI for good – the risk of overlooking multidisciplinary challenges:

Using AI to address social issues such as healthcare, humanitarian crises, food security, and climate change is often highlighted as a major benefit of AI research. However, one of the biggest challenges in these efforts is the narrow and uniform perspective applied to complex social problems [7].

These challenges are often multidisciplinary, shaped by economic, historical, political, and cultural factors, making it difficult to develop solutions that capture their complexity [7]. When AI developers create quick-fix solutions with little or no input from the people who actually need the tools, the result can be ineffective systems designed more for public recognition than for meaningful impact. This can lead to scepticism, causing potentially valuable AI tools to be dismissed as mere trends rather than being properly understood, adopted, and integrated into everyday use.

Additionally, the definition of a “good” solution is highly subjective, leading to inconsistencies in how AI-driven projects are designed, implemented, and evaluated [7]. 

Stability concerns and hallucinations:

AI models can be extremely sensitive to even minor changes in input data, potentially leading to different or incorrect outputs and increasing the risk of unintended errors and misjudgements [1]. This vulnerability can be exploited through adversarial attacks, in which carefully manipulated input data causes an AI system to generate incorrect or biased predictions. Such attacks also pose a significant risk to fairness, as they can be designed to disproportionately affect specific demographic groups or individuals, leading to discriminatory outcomes [16].
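The sketch below illustrates this sensitivity with a fast-gradient-sign-style perturbation against a toy PyTorch classifier: each input feature is nudged slightly in the direction that increases the loss. The untrained linear model, the random input, and the epsilon value are placeholders chosen for illustration; with a real trained model, perturbations of this kind can change predictions while remaining nearly imperceptible in the input.

```python
# Minimal sketch of an adversarial perturbation in the fast-gradient-sign style.
# The toy linear "classifier", random input, and epsilon are illustrative only.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
model = torch.nn.Linear(4, 2)              # stand-in for a trained classifier
x = torch.randn(1, 4, requires_grad=True)  # one input example
label = torch.tensor([1])                  # its assumed true class

# The gradient of the loss with respect to the input shows which small change
# to each feature pushes the model hardest towards a wrong answer.
loss = F.cross_entropy(model(x), label)
loss.backward()

epsilon = 0.25                             # perturbation budget
x_adv = x + epsilon * x.grad.sign()        # small, targeted change to the input

with torch.no_grad():
    print("original prediction: ", model(x).argmax(dim=1).item())
    print("perturbed prediction:", model(x_adv).argmax(dim=1).item())
```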

Another issue, particularly evident in large language models (LLMs), is their tendency to generate misleading or entirely fabricated responses, commonly referred to as hallucinations. LLMs are not built to retrieve factual information but are prediction models that generate responses based on what is statistically most likely. Although these models are continuously refined and fine-tuned, a certain level of hallucination is unavoidable, making it impossible to eliminate false or misleading outputs entirely [8].

For example, legal hallucinations are alarmingly frequent. When prompted with specific, verifiable questions about random federal court cases, ChatGPT-4 generates inaccurate responses 58% of the time, while Llama 2 does so in 88% of cases [9].

One way to reduce hallucinations is by using techniques like Retrieval-Augmented Generation (RAG), which retrieves relevant information from external sources before generating text. This allows for more accurate and context-aware answers by providing factual grounding [18].
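A minimal, self-contained sketch of the RAG pattern is shown below. The toy document list, the naive word-overlap retriever, and the stubbed-out `call_llm` function are all assumptions for illustration; a real system would use an embedding index and an actual language model behind that call.

```python
# Minimal sketch of Retrieval-Augmented Generation: retrieve relevant text first,
# then build a grounded prompt for the model. The corpus, the word-overlap
# retriever, and the stubbed call_llm() are illustrative assumptions only.

DOCUMENTS = [
    "The EU AI Act includes transparency obligations for AI-generated content.",
    "Retrieval-Augmented Generation retrieves external text before generating an answer.",
    "Large language models predict the statistically most likely next tokens.",
]


def retrieve(question: str, docs: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the question."""
    q_words = set(question.lower().split())
    ranked = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return ranked[:top_k]


def build_prompt(question: str, context: list[str]) -> str:
    joined = "\n".join(f"- {c}" for c in context)
    return (
        "Answer using only the context below; say 'not found' if the answer is missing.\n"
        f"Context:\n{joined}\n\nQuestion: {question}"
    )


def call_llm(prompt: str) -> str:
    # Placeholder for a real model call (e.g. an API request).
    return f"[model response grounded in a prompt of {len(prompt)} characters]"


question = "What does Retrieval-Augmented Generation do?"
print(call_llm(build_prompt(question, retrieve(question, DOCUMENTS))))
```

Grounding alone does not guarantee correctness, but it gives the model verifiable material to answer from rather than relying purely on what is statistically likely, which is why the technique is used to reduce hallucinations [18].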

Job market and economic inequalities:

The automation of tasks once performed by humans could lead to job displacement, particularly impacting low-income workers whose roles are more vulnerable to AI-driven replacement. A survey also found that AI-induced job insecurity is linked to increased knowledge-hiding behaviour, both directly and indirectly, due to decreased psychological safety [10].

Without effective workforce adaptation and reskilling programs, the widespread adoption of AI risks deepening social inequalities and widening the economic divide.

State overreach:

A growing fear is that AI could be used as a tool for government surveillance and control. AI-driven monitoring systems have already been deployed in some countries to track citizens’ activities, raising ethical questions about privacy, freedom, and human rights. According to Carnegie’s AI Global Surveillance (AIGS) Index, updated in 2022, at least 97 out of 176 countries worldwide are actively employing AI for surveillance. This includes smart city or safe city platforms in 64 countries, facial recognition systems in 78 countries, AI-driven smart policing in 69 countries, and social media surveillance in 38 countries [12]. As the capabilities of AI advance, striking a balance between national security and individual freedoms will become increasingly important [1].

Concerns about AI in warfare:

In December 2024, OpenAI announced a partnership with Anduril to supply AI services to the US military, following similar moves by Google, Meta, and Anthropic. The partnership, a lucrative deal for OpenAI, aims to improve the country’s counter-unmanned aircraft systems (CUAS), enhancing their ability to detect, assess, and respond to aerial threats in real time [15].

This decision marks a dramatic shift in the company’s stance within just a year. Until January 10, 2024, OpenAI’s official usage policies explicitly prohibited AI applications with an elevated risk of physical harm, including “weapons development” and “military and warfare” [14]. Initially, AI companies like OpenAI positioned themselves as nonprofit organizations committed to addressing global challenges, a key factor in their early appeal to consumers. However, their evolution highlights how profit ultimately remains the primary driving force, raising concerns about the shift away from their original mission.

The move comes amid an intensifying race between the U.S., its allies and China to develop AI-controlled weapons that will operate autonomously, including drones, warships, and fighter jets. The outcome of this technological race could significantly influence the global balance of power [15].


Sources:

  1. Bhaskar Chakravorti. AI’s Trust Problem.

  2. Moshe Glickman & Tali Sharot. How human–AI feedback loops alter human perceptual, emotional and social judgements. 

  3. EU AI Act. Article 50

  4. Security and safety of AI systems. https://www.redhat.com/en/blog/security-and-safety-ai-systems

  5. We Must Fix the Lack of Transparency Around the Data Used to Train Foundation Models. https://hdsr.mitpress.mit.edu/pub/xau9dza3/release/2

  6. Inside the secret list of websites that make AI like ChatGPT sound smart. https://www.washingtonpost.com/technology/interactive/2023/ai-chatbot-learning/

  7. Nyalleng Moorosi, Raesetje Sefala, Alexandra Sasha Luccioni. AI for Whom? Shedding Critical Light on AI for Social Good. https://www.nature.com/articles/d41586-025-00068-5

  8. https://www.nature.com/articles/d41586-025-00068-5

  9. Large Legal Fictions: Profiling Legal Hallucinations in Large Language Models. https://arxiv.org/abs/2401.01301

  10. How artificial intelligence-induced job insecurity shapes knowledge dynamics: the mitigating role of artificial intelligence self-efficacy. https://www.sciencedirect.com/science/article/pii/S2444569X2400129X

  11. Cornell. Making AI Less "Thirsty": Uncovering and Addressing the Secret Water Footprint of AI Models. https://arxiv.org/abs/2304.03271

  12. AI & Big Data Global Surveillance Index (2022 updated). https://data.mendeley.com/datasets/gjhf5y4xjp/4

  13. https://www.cam.ac.uk/stories/malicious-ai-report

  14. https://theintercept.com/2024/01/12/open-ai-military-ban-chatgpt/

  15. https://www.reuters.com/technology/artificial-intelligence/defense-firm-anduril-partners-with-openai-use-ai-national-security-missions-2024-12-04/

  16. https://www.sciencedirect.com/science/article/pii/S266682702400001X#:~:text=Small%20adjustments%20in%20input%20data,and%20feature%20selection%2C%20among%20others

  17. https://medium.com/@severintom_42671/the-limits-of-ai-companionship-2f9f3aa6590a

  18. https://medium.com/@bijit211987/advanced-prompt-engineering-for-reducing-hallucination-bb2c8ce62fc6

  19. https://www.nature.com/articles/d41586-024-00995-9 
