From AI to AGI: status, choices and timelines

What is AI and how do LLMs work today?

Modern large language models (LLMs) are transformer-based predictors that calculate the probability of the next token (a word or word fragment) given a context. The Transformer architecture replaced recurrent networks by using self-attention, resulting in both better quality and significantly more efficient training (Vaswani et al., 2017).
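
Next-token prediction can be sketched in a few lines: the model assigns one score (logit) per vocabulary token, and softmax turns those scores into a probability distribution. The vocabulary and the numbers below are purely illustrative, not from any real model.

```python
import math

# Toy illustration (hypothetical numbers): after a context such as
# "The capital of France is", the model produces one logit per token.
vocab = ["Paris", "London", "banana"]
logits = [4.0, 2.5, -1.0]

# Softmax: p_i = exp(z_i) / sum_j exp(z_j)
exps = [math.exp(z) for z in logits]
total = sum(exps)
probs = [e / total for e in exps]

# Greedy decoding picks the most probable token.
next_token = vocab[probs.index(max(probs))]
print(next_token)  # "Paris"
```

In practice, models sample from this distribution (temperature, top-p) rather than always taking the maximum, but the underlying computation is the same.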

Scaling laws have steered progress: DeepMind's "Chinchilla" result showed that, for compute-optimal training, model size and the number of training tokens should be scaled roughly in proportion; many earlier models were "undertrained" relative to their size.
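
The Chinchilla finding is often summarized as a rule of thumb: train on roughly 20 tokens per parameter, with training compute approximated as C ≈ 6·N·D FLOPs for N parameters and D tokens. The helper below is a sketch of that rule of thumb, not an exact reproduction of the paper's fitted laws.

```python
def chinchilla_optimal(n_params: float) -> dict:
    """Approximate compute-optimal training plan (rule-of-thumb version)."""
    tokens = 20 * n_params          # ~20 training tokens per parameter
    flops = 6 * n_params * tokens   # standard C ~= 6*N*D approximation
    return {"params": n_params, "tokens": tokens, "flops": flops}

# Chinchilla itself: ~70B parameters trained on ~1.4T tokens.
plan = chinchilla_optimal(70e9)
print(plan["tokens"])  # 1.4e12 tokens
```

The point of the result is the trade-off: for a fixed compute budget, a smaller model trained on more tokens often beats a larger, undertrained one.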

In practice, 2025 models combine:

  • Multimodality (text + image/audio/video) with long context windows (hundreds of thousands of tokens in some systems).
  • Efficient serving: e.g. vLLM/PagedAttention, which reduces memory waste in the KV cache and increases throughput 2-4× in production.
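
The core idea of PagedAttention is borrowed from virtual memory: the KV cache is split into fixed-size blocks, and each sequence keeps a block table mapping logical positions to physical blocks, so memory is allocated on demand rather than reserved for the maximum context length. The sketch below illustrates only the bookkeeping, with made-up sizes; it is not vLLM's actual API.

```python
class PagedKVCache:
    """Illustrative block allocator for a paged KV cache (not vLLM's API)."""

    def __init__(self, num_blocks: int = 1024, block_size: int = 16):
        self.block_size = block_size
        self.free = list(range(num_blocks))  # pool of physical block ids
        self.tables = {}    # seq_id -> list of physical block ids
        self.lengths = {}   # seq_id -> number of tokens stored

    def append_token(self, seq_id: int) -> None:
        n = self.lengths.get(seq_id, 0)
        if n % self.block_size == 0:  # current block is full (or none yet)
            self.tables.setdefault(seq_id, []).append(self.free.pop())
        self.lengths[seq_id] = n + 1

    def blocks_used(self, seq_id: int) -> int:
        return len(self.tables.get(seq_id, []))

cache = PagedKVCache()
for _ in range(40):            # generate a 40-token sequence
    cache.append_token(seq_id=0)
print(cache.blocks_used(0))    # 3 blocks of 16 tokens cover 40 tokens
```

Because blocks are allocated lazily, memory fragmentation drops sharply compared with reserving one contiguous slab per request, which is where the throughput gains come from.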

Limitations include the lack of persistent memory between sessions, weak uncertainty management, limited causal understanding of the world, and fragile planning outside the training distribution.

What do we mean by AGI?

Artificial General Intelligence (AGI) refers to systems that can learn, reason and act broadly across domains, approaching human level or better. The idea of "machines that can outperform man" is old: Turing discussed a test for machine "thinking" (1950), and I. J. Good outlined "the first ultraintelligent machine" that could improve itself (1965/66).

The term AGI in its modern sense was popularized in the early 2000s (Goertzel, Legg and others), although some point to uses dating back to the 1990s; the consensus is that AGI became established terminology around 2002-2007 and was consolidated through dedicated conferences and technical texts.

The difference from today's LLMs is not just a matter of size: AGI requires architectures that connect language/multimodal understanding with persistent memory, tool use, sensing and action loops, goal-oriented learning and safe self-improvement, under governance mechanisms that can be documented.

How can AI take the step towards AGI (technical building blocks)

  1. Memory + knowledge retrieval: from in-context learning to true long-term memory (episodic/semantic) with evaluated updating regimes.
  2. Multimodal world models: integrating senses (image/sound/video/biomedical sensors), action and causal prediction.
  3. Agent architecture: planning, tool calls, collaboration between sub-agents and explicit uncertainty management.
  4. Security and governance: the WHO has published more than 40 recommendations for large multimodal models in health; the EU AI Act introduces risk-based duties, regulatory sandboxes and bans on certain practices. This makes governance as important as compute.
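
Building blocks 1-3 above can be sketched as a minimal agent loop: the system plans, calls a tool, writes the observation to memory, and returns an answer. All names and the hard-coded tool choice are illustrative, not a real framework's API; a real agent would let the model decide which tool to call.

```python
def calculator(expr: str) -> str:
    """A single whitelisted 'tool' (restricted eval, no builtins)."""
    return str(eval(expr, {"__builtins__": {}}))

TOOLS = {"calculator": calculator}
memory = []  # episodic memory: (task, observation) pairs

def agent_step(task: str) -> str:
    # A real agent would ask the LLM which tool to call and with what
    # arguments; here the decision is hard-coded to keep the sketch runnable.
    tool, args = "calculator", task
    observation = TOOLS[tool](args)
    memory.append((task, observation))  # persist the result for later steps
    return observation

answer = agent_step("6 * 7")
print(answer)  # "42"
```

Even this toy loop shows where the open problems sit: deciding when to call a tool, how to update memory safely, and how to express uncertainty about the observation.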

When can we expect AGI (timelines with uncertainty)

Major surveys of AI researchers (AI Impacts, 2023-2024) estimate roughly a 50 % probability of HLMI/AGI around 2047, with a very large spread: a 10 % chance as early as 2027, and many respondents who expect it significantly later. The results show both accelerated expectations from 2022 to 2023 and high disagreement.
Popular-science overviews and media coverage paint the same picture of uncertainty and divergent scientific camps.

Editorial assessment: Given the current methodological front (transformers + scaling + ongoing work on memory/agentics), the 2030s look likely for partly general systems in delimited domains, while the 2040s are a cautious middle estimate for more broadly applicable, regulated "AGI-like" systems, conditional on security, regulation and infrastructure being in place.

Where does the front stand internationally?

  • Architecture and operations: transformer derivatives, long contexts, multimodality; serving with vLLM/PagedAttention and hardware optimization (e.g. FlashAttention-3) that make production economically feasible.
  • Secure AI: directions such as "constitutional AI" and frontier-risk programs at leading labs; regulatory implementation in the EU is moving towards sandboxes and sector rules.

What's happening in Norway (and near Norway) right now?

MediVox (Norway) - local models for clinical documentation

What they do: MediVox delivers AI-supported transcription and journal drafting for healthcare professionals with local language models operated in Sandefjord (stated by the company in previous communication with the editorial team and in public presentations). The solution addresses Norwegian clinical practice, documentation and compliance (GDPR/ISO).
Who's behind it: Co-founded and led by Norwegian developers and clinicians; Håkon Berntsen is COO and co-founder.

Note: For clinical claims and use cases, please refer to public notices/contracts; health-related functionality should always be evaluated against WHO recommendations and the EU AI Act when the use is "high risk".

EIR Tec Ltd (UK) - EEG-based brain reading and home diagnostics

What they do: EIR Tec describes a platform for home-based EEG and AI-supported analysis aimed at mental health/neuro (e.g. ADHD/ADD-related assessments). Official website: eirtech.co.uk.
Who's behind it: Technology environment with Norwegian founding ties; Håkon Berntsen is CTO/co-founder (disclosed to the editors).
Editorial clarification: Diagnostics in a legal/medical sense requires regulatory approvals (class, indication, market). EIR Tec communicates ambitions for diagnostic support; readers should distinguish between clinical support tool and formal diagnosis.

ReadySOFT / ReadyPOS (Norway) - AI-powered POS for SMBs

What they do: ReadySOFT is building a .NET MAUI-based POS system (Android/Sunmi) with AI modules for price optimization, menu/range, campaign automation and predictive operations. Technical material and business pitch informs about local LLM infrastructure in ReadySOFT Cloud Sandefjord, Zero Data Retention using external LLM APIs, and integrations with Tripletex, PowerOffice and Fiken.
Who's behind it: The team includes Per Joar Lorvik (CEO), Øyvind Horn (CFO), Helge Andresen (CTO/Advisor), and Håkon Berntsen (AI/architecture, advisor). (Source: investor/product documents submitted to the editors.)


In short (editorial)

  • AI today: transformer models (LLM/LMM) that are extremely capable, but with clear limits around persistent memory, robust planning and causal understanding.
  • AGI as a goal: requires systems with persistent memory, multimodal sensing/action loops, agentic planning and proven security. Historically rooted from Turing (1950) via Good (1965) to the AGI concept (2000s).
  • Timeline: the academic community's median points to around 2047 for HLMI/AGI, but the spread is large; several milestones may come significantly earlier.
  • Norway now: Three different tracks, clinical documentation on local models (MediVox), brain-signal analysis at home (EIR Tec) and AI-powered POS on Norwegian infrastructure (ReadySOFT), show a practical path from today's AI towards more general, secure and regulated intelligence.
