Beyond the obvious

Tech Trends

Written by Hanna Reiestad | Feb 11, 2025 9:19:14 PM


AI Agents

One of the biggest trends in AI is the rise of AI agents — autonomous systems capable of making decisions and completing tasks independently. These agents are increasingly being adopted across industries to improve automation and efficiency, reshaping how businesses operate. An AI agent functions in a continuous loop of observation, processing, and action. It gathers information through provided interfaces and its memory of past interactions, evaluates and prioritizes possible actions, and then executes tasks by accessing enterprise services [2]. Another key shift is industries moving from using individual AI models to multi-agent systems, where multiple AI agents work together to solve complex problems.

But what exactly is an AI agent? Simply put, it is an AI model wrapped in software. AI models themselves are mathematical formulas, primarily prediction models. Prediction models generate numerical outputs, while language models (which are also prediction models) produce sequences of tokens, each with an associated probability of being correct. Any AI model needs a software wrapper to be useful. What sets AI agents apart is the ease and speed with which they can be deployed compared to traditional application development.
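As an illustration of "a model wrapped in software", the observe-process-act loop described above can be sketched in a few lines of Python. Everything here is a hypothetical stand-in — the `Agent` class, its task format, and the trivial decision rule — not a real agent framework:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """A minimal AI agent: a model wrapped in an observe-decide-act loop."""
    memory: list = field(default_factory=list)

    def observe(self, environment: dict) -> dict:
        # Gather information from provided interfaces plus memory of past interactions.
        return {"input": environment, "history": list(self.memory)}

    def decide(self, observation: dict) -> str:
        # Stand-in for the prediction model: pick the highest-priority pending task.
        pending = observation["input"].get("pending_tasks", [])
        return pending[0] if pending else "idle"

    def act(self, action: str) -> str:
        # Execute the chosen task, e.g. by calling an enterprise service,
        # and remember it for the next iteration of the loop.
        self.memory.append(action)
        return f"executed: {action}"

agent = Agent()
result = agent.act(agent.decide(agent.observe({"pending_tasks": ["send_invoice"]})))
print(result)  # executed: send_invoice
```

In a real deployment the `decide` step would call a language model and the `act` step would invoke enterprise services, but the loop structure stays the same.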

In the financial sector, AI agents are being developed to process transactions autonomously, eliminating the need for human intervention. Companies like Stripe and Coinbase are at the forefront of this shift, with Stripe launching software development kits (SDKs) designed for AI agents and Coinbase introducing AI-driven crypto wallets, enabling faster and more efficient transactions [1].

For AI agents to achieve widespread adoption, building user trust is crucial. This requires minimizing algorithmic bias and implementing strategies such as explainable AI, which provides transparency into how AI systems make decisions. Cyber security threats are also a major concern. The ease and speed of deployment means that almost anybody can set up AI agents for almost unlimited purposes. However, this also removes the filter of critical risk assessment inherent in the work of experienced developers. Beyond trust, several challenges hinder adoption, including unclear strategies, limited funding, weak leadership support, and uncertainty about return on investment [3].

Robots and Autonomous Systems

AI-powered physical robots are transforming industries by taking over repetitive physical and digital tasks, analysing sensor data, and making quick decisions. These robots go beyond traditional industrial automation, which has been in use since the 1980s, by continuously learning from their environment, adapting to changing conditions, and taking on increasingly complex tasks. Another major innovation in this field is collaborative robots (cobots) — designed to work alongside human employees, increasing safety and efficiency in the workplace. By integrating with human workflows, cobots can help create more flexible and productive work environments.

However, AI-powered robots also face significant challenges, including integration with legacy systems, regulatory barriers, safety concerns, and high costs, all of which slow adoption. Addressing these challenges requires establishing data quality standards and using high-quality training data to make decision-making more reliable. Additionally, better data collection methods are essential to support continuous learning, while clearer regulations at a broader level are necessary to ensure safety and compliance [3].

Cybersecurity

According to a Capgemini survey, 97% of organizations reported security issues related to generative AI in the past year, as attackers increasingly leverage sophisticated techniques such as phishing, ransomware, and fraud schemes [3]. These evolving threats underscore the urgent need for robust security frameworks to defend against AI-driven cyberattacks.

A key trend emerging in cybersecurity is the use of generative AI for real-time threat detection and response, enabling organizations to analyse and interpret vast datasets with greater speed and accuracy [3]. Simultaneously, the rise of AI-generated misinformation has intensified the focus on disinformation security, with advancements in deepfake detection, impersonation prevention, and reputation protection becoming critical in mitigating identity-based threats and digital manipulation [5].

However, several challenges hinder the widespread adoption of AI-driven cybersecurity. Organizational constraints, such as high implementation costs, a shortage of technical expertise, and difficulties integrating AI with legacy systems, remain significant barriers [3]. Additionally, regulatory challenges in tightly controlled industries like healthcare and financial services, coupled with concerns over data privacy, security, algorithmic bias, and fairness, further complicate deployment [3].

Quantum computing

Quantum computing is a computational paradigm that leverages the principles of quantum mechanics to solve problems that are intractable for classical computers, even for supercomputers. By exploiting phenomena like superposition and entanglement, quantum computers process information in fundamentally new ways, enabling exponential speedups for specific tasks. The technology is still in the experimental phase, and commercial availability may be five to ten years away. This means that for now, leaders do not really need to spend a lot of time exploring the potential of quantum computing. However, there is reason to stay alert; quantum computers are expected to pose a severe threat to today’s encryption methods, with estimates suggesting that by 2029, most conventional asymmetric cryptographic systems could become obsolete [4]. Since cryptographic methods are fundamental to securing data confidentiality, digital signatures, emails, macros, electronic documents, and user authentication, quantum advancements could undermine the integrity and authenticity of all digital communications. To mitigate this risk, increased investment in post-quantum cryptography is essential, focusing on developing encryption techniques that remain secure against quantum-based attacks [4].

One of the major concerns of the national security authorities is that malicious players will develop and deploy quantum computers to break through cyber security long before society in general has any inkling of the threat. No matter how strong you believe your cyber security fences may be, quantum computers will crush all conventional cryptography in a fraction of a second. We are potentially facing the most challenging technological cat-and-mouse game of all time. But not yet. We are still most likely many years away. However, by the time we think we are almost there, the game will already be over, and the mouse dead.

Spatial Computing

Spatial computing is rapidly emerging as a transformative technology that blends the digital and physical worlds, allowing computers to understand and interact with real-world spaces. A well-known example is augmented reality (AR), where digital images appear on a screen or headset, helping users navigate or complete tasks. Such technologies offer new ways to contextualize business data, engage customers and employees, and interact with digital systems [4].

By leveraging sensors, computer vision, and AI, spatial computing allows users to interact with both physical and virtual objects through an immersive, blended interface. According to Deloitte, the spatial computing market is projected to grow at an annual rate of 18.2% between 2022 and 2033, with the potential to fundamentally reshape how we perceive and interact with digital information [4]. At its core, spatial computing detects and interprets physical elements in the real world, bridges digital and physical inputs through advanced technology, and overlays digital outputs onto a unified interface. This capability has already enabled diverse and transformative applications, with real-time simulations emerging as one of its primary use cases [4].

One of the most promising advancements in spatial computing is its integration with AI agents, a combination that could revolutionize industries such as supply chain management, software development, and financial analysis [4]. AI-driven systems would be capable of anticipating user needs based on historical actions and preferences, ensuring that the right content is delivered, or the right tasks are executed at the optimal time.

Despite its vast potential, spatial computing still faces significant challenges, particularly regarding data quality, interoperability, and system integration [4]. Ensuring that digital representations accurately reflect real-world conditions remains a crucial hurdle that must be overcome to achieve widespread adoption and functionality across industries.

Transparency in LLM development and DeepSeek

The concept of “openness” in AI development is becoming increasingly complex and contested. While many large-scale AI models are marketed as open, their accessibility remains limited due to the dominance of corporate actors who control the data, frameworks, computing power, and funding required for development. The “black box” problem persists, as much of the training data and decision-making processes behind AI models remain undisclosed [6].

Despite claims of transparency, most so-called open AI systems are built on top of closed models, offering limited reusability and little insight into their underlying training data. This lack of transparency hinders validation and reproducibility, raising concerns over data extraction, intellectual property rights, and algorithmic bias. Most LLMs rely on vast datasets scraped from the internet, which include copyrighted materials such as text, images, and code—making it difficult to assess legal and ethical implications [6].

Beyond data, the computational power required for large-scale AI development is another barrier to true openness. The AI industry is heavily dependent on a few dominant corporations, particularly Nvidia, which controls 70–90% of the AI chip market. Most developers rely on CUDA, a framework that only supports Nvidia GPUs, further consolidating power within a handful of companies. Similarly, development frameworks like PyTorch and TensorFlow, created by Meta and Google, shape the ecosystem by setting the norms and tools used by researchers, developers, and students. This corporate control limits independent innovation and reinforces a centralized AI landscape [6].

The launch of DeepSeek, a Chinese-based large language model (LLM), has sparked discussions about a potential paradigm shift in AI development. Positioned as a cost-efficient alternative to established AI models, DeepSeek directly challenges the dominance of Western tech giants. While its reported rapid development still requires independent validation, its benchmark results underscore China’s expanding influence in the global AI landscape [7].

In the future, we can expect to see more of these smaller, fast-moving challengers adopting an open-source approach. Innovations in pricing, functionality, computational efficiency, and environmental sustainability will continue to disrupt and reshape the industry.

One of the most immediate effects of DeepSeek’s introduction was its impact on financial markets, with Nvidia’s stock price experiencing a significant drop. This reaction underscores the market’s sensitivity to AI-driven disruption and demonstrates how emerging players can challenge existing business models [7]. By reducing the need for high-cost computational resources, models like DeepSeek could shift the AI landscape away from its dependence on massive data centres and expensive GPUs. If these models prove viable, they may fundamentally redefine how large language models (LLMs) are developed and deployed, promoting greater accessibility and competition within the AI industry [7].

Is DeepSeek the new OpenAI? Probably not, but that is beside the point. DeepSeek has already made an impact, signalling the rise of new AI challengers worldwide, many of which are emerging outside Silicon Valley. Does the decline in tech stocks indicate the downfall of AI? Hardly. It instead reflects a necessary market correction, where overenthusiastic investors are coming to terms with reality.

Moving forward, large language models (LLMs) will be valuable as foundational tools for solving real-world problems. Future LLM architectures will prioritize modularity, allowing integration and replacement of different models and reducing dependence on any single provider. Additionally, these architectures will support and encourage combining large LLMs with smaller, specialized models tailored for specific tasks. The competitive edge will not be the model itself but the services, infrastructure, and ecosystem built around it.
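The modular pattern described above can be illustrated with a small routing layer. The class and model names below are hypothetical stand-ins, not a real framework; the point is that when every model sits behind the same callable interface, any provider can be swapped in or out:

```python
from typing import Callable, Dict

# A "model" is just a callable from prompt to response, so any provider's
# client can be wrapped to fit without changing the router.
ModelFn = Callable[[str], str]

class ModularLLMStack:
    """Route each task to a small specialized model when one is registered,
    falling back to a large general-purpose model otherwise."""

    def __init__(self, general_model: ModelFn):
        self.general_model = general_model
        self.specialists: Dict[str, ModelFn] = {}

    def register(self, task: str, model: ModelFn) -> None:
        # Models can be added or replaced freely, task by task.
        self.specialists[task] = model

    def run(self, task: str, prompt: str) -> str:
        model = self.specialists.get(task, self.general_model)
        return model(prompt)

# Illustrative stand-ins, not real model calls:
stack = ModularLLMStack(general_model=lambda p: f"[large-llm] {p}")
stack.register("sentiment", lambda p: f"[small-sentiment-model] {p}")

print(stack.run("sentiment", "Great product!"))  # [small-sentiment-model] Great product!
print(stack.run("summarize", "Long report..."))  # [large-llm] Long report...
```

Because the router owns the mapping from task to model, swapping one provider for another is a one-line change rather than an application rewrite.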

Data Quality

We often hear that data quality is an essential success factor for AI. But what does that mean?

Good data quality means a state of data that is suitable, representative, and trustworthy for the intended purposes. Key elements are outlined below.

Accuracy

To what extent does your data accurately represent the real-world entities, events, and processes it describes? If you are a building authority and you want to use aerial town photos to train an AI model to automate the application procedure for garages, you need to make sure that the aerial town photos include garages and that you are able to identify which of the buildings in the photo are actually garages and not, say, greenhouses.

Completeness

Are all required data elements present? Are there missing values or incomplete records that could impact analysis or decision-making? One such example is the Dutch SyRI case [8], where the Tax and Customs Administration automated a fraud detection system for social benefits. The fundamental data quality problem was that their data mainly represented urban areas with a high fraction of non-western immigrants and low average income. Naturally, their fraud detection algorithm indicated that fraud would happen in low-income families with a non-western immigration background. Surprise! The Dutch government resigned over this scandal. Do not become the Dutch government.

Consistency

Are your data uniform and coherent across systems, data sets, and time periods? Or perhaps different teams at the construction site have different procedures for how they report logistics and resource utilisation? If so, your data may not tell you the correct story to support strategic resource allocation decisions.

Timeliness

Are your data up to date? Or are they sooooo-last-year? Keep in mind the potential consequences of automating decisions based on outdated data… even if regulators let it pass, your customers may very well shred you to pieces.

Validity

Are your data in a format that conforms to your business standard? Do you have the necessary consents from your customers or employees to use the data for the intended purposes?

Uniqueness

Have you filled your databases with duplicates, triplicates, multiplicates? If so, large volumes of data may be completely redundant. Redundant data is waste and a cyber security risk. Unique data, on the other hand, provide useful insights into the variety of entities, events, and processes of your business operations.

Reliability

Are your data credible and not tampered with? If you automate processes based on your data, are you confident that the data tells you the right story?

Relevance

Are the data useful for the specific context of your organisation and business line? If not, you may not even be allowed to keep them.
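Several of the dimensions above — completeness, uniqueness, validity, and timeliness — lend themselves to simple automated checks. The sketch below is illustrative only, with a hypothetical record format and thresholds; a real pipeline would use dedicated data-quality tooling:

```python
import datetime

def quality_report(records, required_fields, valid_formats, max_age_days, today=None):
    """Count records that fail completeness, uniqueness, validity, or timeliness."""
    today = today or datetime.date.today()
    seen = set()
    report = {"incomplete": 0, "duplicates": 0, "invalid": 0, "stale": 0}
    for rec in records:
        # Completeness: every required field must be present and non-empty.
        if any(not rec.get(f) for f in required_fields):
            report["incomplete"] += 1
        # Uniqueness: flag records whose key fields repeat earlier records.
        key = tuple(rec.get(f) for f in required_fields)
        if key in seen:
            report["duplicates"] += 1
        seen.add(key)
        # Validity: values must conform to the business standard.
        if not all(check(rec.get(f)) for f, check in valid_formats.items()):
            report["invalid"] += 1
        # Timeliness: records older than the threshold are stale.
        updated = rec.get("updated")
        if updated and (today - updated).days > max_age_days:
            report["stale"] += 1
    return report
```

Accuracy, consistency, reliability, and relevance are harder to automate, since they require comparing the data against the real world and the business context rather than against the data itself.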

To summarize, data quality may be compared to cooking: the data is your ingredients, and the dish is the purpose for which you process them. If you have the wrong ingredients in the wrong volumes, your tomatoes are rotten, and somebody threw peanuts into the butter, the end result is highly likely to be disastrous. If the customer has a peanut allergy and you are the owner of the restaurant, you may even end up with a lawsuit for not keeping your kitchen in order.

Sources

1. CB Insights. 2025 Tech Trends.

2. BCG. AI agents. https://www.bcg.com/capabilities/artificial-intelligence/ai-agents 

3. Capgemini. Top tech trends of 2025 – AI powered everything.

4. Deloitte. The new math: Solving cryptography in an age of quantum.

5. Gartner. 2025 Top Strategic Technology Trends.

6. Nature. Why ‘open’ AI systems are actually closed, and why this matters. https://www.nature.com/articles/s41586-024-08141-1

7. Michael Wade. DeepSeek: An Unwelcome Guest to the West’s AI Party. https://www.linkedin.com/pulse/deepseek-unwelcome-guest-wests-ai-party-michael-wade-vtufe/ 

8. The Dutch SyRI case: https://www.humanrightspulse.com/mastercontentblog/dutch-court-finds-syri-algorithm-violates-human-rights-norms-in-landmark-case