{"id":1633,"date":"2025-10-16T02:16:42","date_gmt":"2025-10-16T00:16:42","guid":{"rendered":"https:\/\/nettsak.no\/?p=1633"},"modified":"2025-10-16T02:18:35","modified_gmt":"2025-10-16T00:18:35","slug":"from-ai-to-agi-status-roadmap-and-timelines","status":"publish","type":"post","link":"https:\/\/nettsak.no\/en\/from-ai-to-agi-status-roadmap-and-timelines\/","title":{"rendered":"From AI to AGI: status, choices and timelines"},"content":{"rendered":"<h2 class=\"wp-block-heading\">What is AI and how do LLMs work today?<\/h2>\n\n\n\n<p>Modern language models (LLM) are <strong>transformer-based<\/strong> predictors that calculate the probability of the next token (word\/word fragment) given a context. The Transformer architecture replaced recurrent networks by using <strong>self-attention<\/strong>resulting in both better quality and significant training efficiency (Vaswani et al., 2017). <a href=\"https:\/\/arxiv.org\/abs\/1706.03762?utm_source=chatgpt.com\" target=\"_blank\" rel=\"noreferrer noopener\">arXiv+1<\/a><\/p>\n\n\n\n<p><strong>Scaling laws<\/strong> has steered the progress: DeepMind's \"Chinchilla\" showed that for <em>compute-optimal<\/em> training should <strong>model size and number of training tokens are scaled approximately the same<\/strong>; many models were previously \"undertrained\" relative to their size. <a href=\"https:\/\/arxiv.org\/abs\/2203.15556?utm_source=chatgpt.com\" target=\"_blank\" rel=\"noreferrer noopener\">arXiv+1<\/a><\/p>\n\n\n\n<p>In practice, 2025 models combine:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Multimodality<\/strong> (text + image\/audio\/video) with long context windows (hundreds of thousands of tokens in some systems).<\/li>\n\n\n\n<li><strong>Efficient serving<\/strong>: e.g. <strong>vLLM\/PagedAttention<\/strong>which reduces memory loss in KV cache and increases throughput 2-4\u00d7 in production. <a href=\"https:\/\/arxiv.org\/abs\/2309.06180?utm_source=chatgpt.com\" target=\"_blank\" rel=\"noreferrer noopener\">arXiv+1<\/a><\/li>\n<\/ul>\n\n\n\n<p><strong>Limitations<\/strong> consists of: lack of lasting memory between sessions, uncertainty management, causal understanding of the world and robust planning outside of training allocation.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">What do we mean by AGI?<\/h2>\n\n\n\n<p><strong>Artificial General Intelligence (AGI)<\/strong> is used for systems that can learn, reason and act broadly across domains-approaching human level or better. The very idea of \"machines that can outperform man\" is old: Turing discussed the test for machine \"thinking\" (1950), and I. J. Good outlined <strong>\"the first ultraintelligent machine\"<\/strong> who can improve themselves (1965\/66). <a href=\"https:\/\/courses.cs.umbc.edu\/471\/papers\/turing.pdf?utm_source=chatgpt.com\" target=\"_blank\" rel=\"noreferrer noopener\">courses.cs.umbc.edu+2incompleteideas.net+2<\/a><\/p>\n\n\n\n<p>The term <strong>AGI<\/strong> in the modern sense was popularized in the early 2000s (Goertzel, Legg et al.), although some point to use dating back to the 1990s; the consensus is that <em>AGI<\/em> was established as a terminology around 2002-2007 and consolidated through its own conferences and technical texts. 
<a href=\"https:\/\/en.wikipedia.org\/wiki\/Artificial_general_intelligence?utm_source=chatgpt.com\" target=\"_blank\" rel=\"noreferrer noopener\">Wikipedia+1<\/a><\/p>\n\n\n\n<p><strong>The difference from today's LLMs<\/strong> is not just about size: AGI requires architectures that connect language\/multimodal understanding with <strong>persistent memory, tool use, sensory and action loops<\/strong>, and <strong>goal-oriented learning<\/strong> and safe self-improvement - under management mechanisms that can be documented.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">How can AI take the step towards AGI (technical building blocks)<\/h2>\n\n\n\n<ol class=\"wp-block-list\">\n<li><strong>Memory + knowledge retrieval<\/strong>: from in-context learning to true long-term memory (episodic\/semantic) with evaluated updating regimes.<\/li>\n\n\n\n<li><strong>Multimodal world models<\/strong>: integrating senses (image\/sound\/video\/biomedical sensors), action and causal prediction.<\/li>\n\n\n\n<li><strong>Agent architecture<\/strong>: planning, tool calls, collaboration between sub-agents and explicit uncertainty management.<\/li>\n\n\n\n<li><strong>Security and governance<\/strong>: WHO has &gt;40 recommendations for large multimodal models in health; <strong>EU AI Act<\/strong> introduces risk-based duties, sandboxes and bans on certain practices. This makes <em>governance<\/em> as important as compute. <a href=\"https:\/\/www.who.int\/news\/item\/18-01-2024-who-releases-ai-ethics-and-governance-guidance-for-large-multi-modal-models?utm_source=chatgpt.com\" target=\"_blank\" rel=\"noreferrer noopener\">Reuters+4World Health Organization+4World Health Organization+4<\/a><\/li>\n<\/ol>\n\n\n\n<h2 class=\"wp-block-heading\">When can we expect AGI (timelines with uncertainty)<\/h2>\n\n\n\n<p>Major surveys of AI researchers (AI Impacts, 2023-2024) estimate approximately <strong>50 % probability of HLMI\/AGI around 2047<\/strong>, with a very large spread; 10 % already in 2027 and many who believe significantly later. The results show both accelerated expectations from 2022 to 2023 and high disagreement. <a href=\"https:\/\/aiimpacts.org\/wp-content\/uploads\/2023\/04\/Thousands_of_AI_authors_on_the_future_of_AI.pdf?utm_source=chatgpt.com\" target=\"_blank\" rel=\"noreferrer noopener\">AI Impacts+2arXiv+2<\/a><br>Popular science overviews and media coverage point to the same picture of uncertainty and divergent scientific camps. <a href=\"https:\/\/ourworldindata.org\/ai-timelines?utm_source=chatgpt.com\" target=\"_blank\" rel=\"noreferrer noopener\">Our World in Data+1<\/a><\/p>\n\n\n\n<p><strong>Editorial review:<\/strong> Given the current method front (transform + scale + work with memory\/agentics) <strong>2030s<\/strong> as the likely period for <em>part-general<\/em> systems in delimited domains, while <strong>2040s<\/strong> is a cautious middle ground for more broadly applicable, regulated \"AGI-like\" systems-conditional on security, regulation and infrastructure in place.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Where does the front stand internationally?<\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Architecture and operations<\/strong>: transformer derivatives, long contexts, multimodality; serving with <strong>vLLM\/PagedAttention<\/strong> and hardware optimization (e.g. FlashAttention-3) that make production economically feasible. 
<a href=\"https:\/\/arxiv.org\/abs\/2309.06180?utm_source=chatgpt.com\" target=\"_blank\" rel=\"noreferrer noopener\">arXiv+1<\/a><\/li>\n\n\n\n<li><strong>Secure AI<\/strong>: directions such as \"constitutional AI\" and <em>frontier risk<\/em>-programs at leading labs; regulatory implementation in the EU is moving towards sandboxes and sector rules. <a href=\"https:\/\/artificialintelligenceact.eu\/?utm_source=chatgpt.com\" target=\"_blank\" rel=\"noreferrer noopener\">Artificial Intelligence Act+1<\/a><\/li>\n<\/ul>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h1 class=\"wp-block-heading\">What's happening in Norway (and near Norway) right now?<\/h1>\n\n\n\n<h2 class=\"wp-block-heading\">MediVox (Norway) - local models for clinical documentation<\/h2>\n\n\n\n<p><strong>What they do:<\/strong> MediVox delivers AI-supported transcription and journal drafting for healthcare professionals with <strong>local language models operated in Sandefjord<\/strong> (stated by the company in previous communication with the editorial team and in public presentations). The solution addresses Norwegian clinical practice, documentation and compliance (GDPR\/ISO).<br><strong>Who's behind it:<\/strong> Co-founded and led by Norwegian developers and clinicians; H\u00e5kon Berntsen is COO and co-founder.<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>Notice: For clinical claims and use cases, please refer to public notices\/contracts; health-related functionality should always be evaluated against WHO recommendations and the EU AI Act when use is \"high risk\". <a href=\"https:\/\/www.who.int\/news\/item\/18-01-2024-who-releases-ai-ethics-and-governance-guidance-for-large-multi-modal-models?utm_source=chatgpt.com\" target=\"_blank\" rel=\"noreferrer noopener\">World Health Organization+1<\/a><\/p>\n<\/blockquote>\n\n\n\n<h2 class=\"wp-block-heading\">EIR Tec Ltd (UK) - EEG-based brain reading and home diagnostics<\/h2>\n\n\n\n<p><strong>What they do:<\/strong> EIR Tec describes a platform for <strong>home-based EEG<\/strong> and AI-supported analysis aimed at mental health\/neuro (e.g. ADHD\/ADD-related assessments). Official website: eirtech.co.uk. <a href=\"https:\/\/eirtech.co.uk\/\" target=\"_blank\" rel=\"noreferrer noopener\">EIR TEC<\/a><br><strong>Who's behind it:<\/strong> Technology environment with Norwegian founding ties; H\u00e5kon Berntsen is CTO\/co-founder (disclosed to the editors).<br><strong>Editorial clarification:<\/strong> <strong>Diagnostics<\/strong> in a legal\/medical sense requires regulatory approvals (class, indication, market). EIR Tec communicates ambitions for diagnostic support; readers should distinguish between <em>clinical support tool<\/em> and <em>formal diagnosis<\/em>.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">ReadySOFT \/ ReadyPOS (Norway) - AI-powered POS for SMBs<\/h2>\n\n\n\n<p><strong>What they do:<\/strong> ReadySOFT is building a .NET MAUI-based POS system (Android\/Sunmi) with <strong>AI modules for price optimization, menu\/range, campaign automation and predictive operations<\/strong>. 
\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h1 class=\"wp-block-heading\">What's happening in Norway (and near Norway) right now?<\/h1>\n\n\n\n<h2 class=\"wp-block-heading\">MediVox (Norway) - local models for clinical documentation<\/h2>\n\n\n\n<p><strong>What they do:<\/strong> MediVox delivers AI-supported transcription and drafting of clinical notes for healthcare professionals, with <strong>local language models operated in Sandefjord<\/strong> (stated by the company in earlier communication with the editorial team and in public presentations). The solution targets Norwegian clinical practice, documentation and compliance (GDPR\/ISO).<br><strong>Who is behind it:<\/strong> Co-founded and led by Norwegian developers and clinicians; H\u00e5kon Berntsen is COO and co-founder.<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>Note: For clinical claims and use cases, refer to public notices\/contracts; health-related functionality should always be evaluated against the WHO recommendations and the EU AI Act when the use is \"high risk\". <a href=\"https:\/\/www.who.int\/news\/item\/18-01-2024-who-releases-ai-ethics-and-governance-guidance-for-large-multi-modal-models?utm_source=chatgpt.com\" target=\"_blank\" rel=\"noreferrer noopener\">World Health Organization+1<\/a><\/p>\n<\/blockquote>\n\n\n\n<h2 class=\"wp-block-heading\">EIR Tec Ltd (UK) - EEG-based brain reading and home diagnostics<\/h2>\n\n\n\n<p><strong>What they do:<\/strong> EIR Tec describes a platform for <strong>home-based EEG<\/strong> and AI-supported analysis aimed at mental health\/neuro (e.g. ADHD\/ADD-related assessments). Official website: eirtech.co.uk. <a href=\"https:\/\/eirtech.co.uk\/\" target=\"_blank\" rel=\"noreferrer noopener\">EIR TEC<\/a><br><strong>Who is behind it:<\/strong> A technology team with Norwegian founder ties; H\u00e5kon Berntsen is CTO\/co-founder (disclosed to the editors).<br><strong>Editorial clarification:<\/strong> <strong>Diagnostics<\/strong> in the legal\/medical sense requires regulatory approvals (class, indication, market). EIR Tec communicates ambitions for diagnostic support; readers should distinguish between a <em>clinical support tool<\/em> and a <em>formal diagnosis<\/em>.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">ReadySOFT \/ ReadyPOS (Norway) - AI-powered POS for SMBs<\/h2>\n\n\n\n<p><strong>What they do:<\/strong> ReadySOFT is building a .NET MAUI-based POS system (Android\/Sunmi) with <strong>AI modules for price optimization, menu\/assortment, campaign automation and predictive operations<\/strong>. Technical material and the business pitch describe local LLM infrastructure in <strong>ReadySOFT Cloud Sandefjord<\/strong>, <strong>Zero Data Retention<\/strong> when external LLM APIs are used, and integrations with Tripletex, PowerOffice and Fiken.<br><strong>Who is behind it:<\/strong> The team includes <strong>Per Joar Lorvik (CEO)<\/strong>, <strong>\u00d8yvind Horn (CFO)<\/strong>, <strong>Helge Andresen (CTO\/Advisor)<\/strong> and <strong>H\u00e5kon Berntsen (AI\/architecture, advisor)<\/strong> (source: investor\/product documents submitted to the editors).<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\">In short (editorial)<\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>AI today<\/strong>: transformer models (LLM\/LMM) that are extremely capable, but with clear limitations, especially around persistent memory, robust planning and causal understanding. <a href=\"https:\/\/arxiv.org\/abs\/1706.03762?utm_source=chatgpt.com\" target=\"_blank\" rel=\"noreferrer noopener\">arXiv<\/a><\/li>\n\n\n\n<li><strong>AGI as a goal<\/strong>: requires systems with persistent memory, a multimodal sensing\/action loop, agentic planning and demonstrable safety. Historically rooted from Turing (1950) via Good (1965) to the AGI concept (2000s). <a href=\"https:\/\/courses.cs.umbc.edu\/471\/papers\/turing.pdf?utm_source=chatgpt.com\" target=\"_blank\" rel=\"noreferrer noopener\">courses.cs.umbc.edu+2incompleteideas.net+2<\/a><\/li>\n\n\n\n<li><strong>Timeline<\/strong>: the research community's median points to around <strong>2047<\/strong> for HLMI\/AGI, but the spread is large; several milestones may come significantly earlier. <a href=\"https:\/\/aiimpacts.org\/wp-content\/uploads\/2023\/04\/Thousands_of_AI_authors_on_the_future_of_AI.pdf?utm_source=chatgpt.com\" target=\"_blank\" rel=\"noreferrer noopener\">AI Impacts<\/a><\/li>\n\n\n\n<li><strong>Norway now<\/strong>: three different tracks, clinical documentation on local models (MediVox), at-home brain-signal analysis (EIR Tec) and AI-powered POS on Norwegian infrastructure (ReadySOFT), show a <em>practical<\/em> path from today's AI towards more general, safe and regulated intelligence.<\/li>\n<\/ul>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\">Sources (selection)<\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Turing, A. M. \"Computing Machinery and Intelligence\" (1950). <a href=\"https:\/\/courses.cs.umbc.edu\/471\/papers\/turing.pdf?utm_source=chatgpt.com\" target=\"_blank\" rel=\"noreferrer noopener\">courses.cs.umbc.edu<\/a><\/li>\n\n\n\n<li>Good, I. J. \"Speculations Concerning the First Ultraintelligent Machine\" (1965\/66). <a href=\"https:\/\/incompleteideas.net\/papers\/Good65ultraintelligent.pdf?utm_source=chatgpt.com\" target=\"_blank\" rel=\"noreferrer noopener\">incompleteideas.net+1<\/a><\/li>\n\n\n\n<li>Vaswani et al. \"Attention Is All You Need\" (2017). <a href=\"https:\/\/arxiv.org\/abs\/1706.03762?utm_source=chatgpt.com\" target=\"_blank\" rel=\"noreferrer noopener\">arXiv<\/a><\/li>\n\n\n\n<li>Hoffmann et al. \"Training Compute-Optimal Large Language Models\" (Chinchilla, 2022). <a href=\"https:\/\/arxiv.org\/abs\/2203.15556?utm_source=chatgpt.com\" target=\"_blank\" rel=\"noreferrer noopener\">arXiv+1<\/a><\/li>\n\n\n\n<li>vLLM\/PagedAttention (2023). 
<a href=\"https:\/\/arxiv.org\/abs\/2309.06180?utm_source=chatgpt.com\" target=\"_blank\" rel=\"noreferrer noopener\">arXiv+1<\/a><\/li>\n\n\n\n<li>WHO, <em>AI ethics and governance guidance for LMMs<\/em> (2024-25). <a href=\"https:\/\/www.who.int\/news\/item\/18-01-2024-who-releases-ai-ethics-and-governance-guidance-for-large-multi-modal-models?utm_source=chatgpt.com\" target=\"_blank\" rel=\"noreferrer noopener\">World Health Organization+1<\/a><\/li>\n\n\n\n<li>EU AI Act - overview and entry into force. <a href=\"https:\/\/www.europarl.europa.eu\/topics\/en\/article\/20230601STO93804\/eu-ai-act-first-regulation-on-artificial-intelligence?utm_source=chatgpt.com\" target=\"_blank\" rel=\"noreferrer noopener\">European Parliament+1<\/a><\/li>\n\n\n\n<li>AGI concept and history. <a href=\"https:\/\/en.wikipedia.org\/wiki\/Artificial_general_intelligence?utm_source=chatgpt.com\" target=\"_blank\" rel=\"noreferrer noopener\">Wikipedia<\/a><\/li>\n\n\n\n<li>Timelines\/HLMI: AI Impacts 2023\/2024. <a href=\"https:\/\/aiimpacts.org\/wp-content\/uploads\/2023\/04\/Thousands_of_AI_authors_on_the_future_of_AI.pdf?utm_source=chatgpt.com\" target=\"_blank\" rel=\"noreferrer noopener\">AI Impacts+2arXiv+2<\/a><\/li>\n\n\n\n<li>EIR Tec (official website). <a href=\"https:\/\/eirtech.co.uk\/\" target=\"_blank\" rel=\"noreferrer noopener\">EIR TEC<\/a><\/li>\n\n\n\n<li>ReadySOFT documents (submitted).<\/li>\n<\/ul>\n\n\n\n<p><\/p>","protected":false},"excerpt":{"rendered":"<p>Hva er AI\u2014og hvordan fungerer LLM-er i dag? Moderne spr\u00e5kmodeller (LLM) er transformer-baserte prediktorer som beregner sannsynligheten for neste token (ord\/ordfragment) gitt en kontekst. Transformer-arkitekturen erstattet rekurrente nettverk ved \u00e5 bruke self-attention, noe som ga b\u00e5de bedre kvalitet og kraftig oppl\u00e6ringseffektivitet (Vaswani mfl., 2017). arXiv+1 Skaleringslover har styrt fremgangen: DeepMinds \u201cChinchilla\u201d viste at for compute-optimal [&hellip;]<\/p>","protected":false},"author":1,"featured_media":1634,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[42],"tags":[],"class_list":["post-1633","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-teknologi"],"_links":{"self":[{"href":"https:\/\/nettsak.no\/en\/wp-json\/wp\/v2\/posts\/1633","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/nettsak.no\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/nettsak.no\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/nettsak.no\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/nettsak.no\/en\/wp-json\/wp\/v2\/comments?post=1633"}],"version-history":[{"count":0,"href":"https:\/\/nettsak.no\/en\/wp-json\/wp\/v2\/posts\/1633\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/nettsak.no\/en\/wp-json\/wp\/v2\/media\/1634"}],"wp:attachment":[{"href":"https:\/\/nettsak.no\/en\/wp-json\/wp\/v2\/media?parent=1633"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/nettsak.no\/en\/wp-json\/wp\/v2\/categories?post=1633"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/nettsak.no\/en\/wp-json\/wp\/v2\/tags?post=1633"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}