Talk:Artificial general intelligence

Please fix an oversight in the Herb Simon quotation/citation/reference/source

I'm not familiar with how to change or create a package of citation, reference, and source. Could someone do this, or point to a tutorial? In the History section, Simon is quoted as saying in 1965 that "machines will be capable, within twenty years, of doing any work a man can do." The reference is correct, but the book it appeared in was a reprint; Simon wrote the statement in 1960. This is significant because a 1960 report showed a consensus that the milestone would be reached by 1980, and Minsky stated in a Life Magazine article in 1970 that it would be by 1978, so why not get Simon right? The 1960 reference is: Simon, H.A. 1960. The new science of management decision. New York: Harper. Reprinted in The shape of automation for men and management. Harper and Row, 1965. (For details: jgrudin@hotmail.com) 174.165.52.105 (talk) 21:15, 13 March 2023 (UTC)

Dangers of AGI

Nowadays, when people talk about AGI, they are often thinking of the existential risk that AGI poses to humanity. Why was adding a comment about that to the main header section considered disruptive? 50.159.212.130 (talk) 18:24, 25 March 2023 (UTC)

Wiki Education assignment: Research Process and Methodology - SP23 - Sect 201 - Thu

This article was the subject of a Wiki Education Foundation-supported course assignment, between 25 January 2023 and 5 May 2023. Further details are available on the course page. Student editor(s): Liliability (article contribs).

— Assignment last updated by Liliability (talk) 03:42, 13 April 2023 (UTC)

AGI - what does it actually MEAN? And has that meaning changed over the last couple of years?

I was just watching this talk: https://www.youtube.com/watch?v=pYXy-A4siMw&ab_channel=RobertMiles It defines "intelligence" as the ability to make decisions to accomplish goals, and "general" as the ability to do this over a wide range of unrelated domains. Is this a valid definition of AGI? And if so, how are modern-day GPT-style AIs not AGI? Have the goalposts moved since the video was made? If so, where to? DewiMorgan (talk) 23:41, 22 April 2023 (UTC)

At this point, when AI can pass arbitrary exams set for humans, doing better than the average human who has trained for them; when it can translate between languages better than the average trained human; when it can drive a car better than the average trained human; and when it can recognize items better than the average human, I think we either need to change the lede to say that AGI is already here, or we need a strong argument for why "artificial general intelligence" can't be measured in the same way as human general intelligence, along with a well-defined description of where the goalposts currently stand. As it is, we have AI which is (in at least all the ways we currently measure intelligence) smarter than humans already, yet the article claims that the AI isn't generally intelligent. It passes three of the four tests in the article, and I'd bet it could pass the coffee test for some kitchens, too, if you fed images into it. So where have the goalposts been moved to now? DewiMorgan (talk) 17:43, 30 May 2023 (UTC)

AI-complete problem example

There are many problems that may require general intelligence, if machines are to solve them as well as people do. For example, even specific straightforward tasks, like machine translation, require that a machine read and write in both languages (NLP), follow the author's argument (reason), know what is being talked about (knowledge), and faithfully reproduce the author's original intent (social intelligence). All of these problems need to be solved simultaneously in order to reach human-level machine performance.

GPT-4 can do all of these things, can't it? flarn2006 [u t c] time: 01:43, 25 June 2023 (UTC)

No, it can't. GPT-4 doesn't actually reason or understand anything. This is why, for example, it struggles at high school math, despite being able to regurgitate the steps. Writ Keeper  13:52, 25 June 2023 (UTC)
Actually, researchers don't know all of what GPT-4 can do. They are still trying to figure out how it produces the emergent abilities and agentic behavior they've identified so far. For example, the AI can "explain its reasoning", which implies that it may be applying reasoning (such as reasoning it finds embedded in content), but whether or not it is actually reasoning remains unknown. The citation provided in the previous message is dated March, but AI models are even more capable now than they were then, and they are continuously improving and being improved. Meanwhile, since 2015, David Ferrucci, creator of IBM Watson, has been working under his own company, Elemental Cognition, on combining generative AI with a reasoning engine. Others are no doubt working on that as well. Also, there is an AI development surge right now that is pushing generative AI in nearly every conceivable direction, including combining and looping it with other AIs and with robotics, for iterative problem solving and goal attainment, both virtually and in the physical world. The situation is a manifestation of accelerating change and, as such, is continuously speeding up. Consequently, estimates of how much time will elapse before artificial general intelligence is achieved or emerges are shrinking.    — The Transhumanist   08:47, 27 June 2023 (UTC) P.S.: pinging @Flarn2006 and Writ Keeper:.    — The Transhumanist   08:50, 27 June 2023 (UTC)

AGI versus ASI - possible Wikipedia article cross-referencing?

There is little on the Wikipedia AGI page that refers or links to ASI (artificial superintelligence) or the Superintelligence Wikipedia page: https://en.m.wikipedia.org/wiki/Superintelligence For instance, it could note that the control problem with AGI might evolve into a much bigger technological singularity control problem with ASI!

I also think that this Wikipedia AGI article should be one of the links at the bottom of the Superintelligence page. I know some of the Wikipedia super editors might fault me for not making such changes myself, but I consider myself just a wiki-noob and don't want to screw things up and get people mad at me. 174.247.181.184 (talk) 07:56, 11 July 2023 (UTC)

Paths to AGI

In recent years, it has become clear that brain simulation is not the easiest path to AGI. The section on brain simulation occupies too much space relative to its importance for the article. On the other hand, large language models, and perhaps also DeepMind's approach of making increasingly generalist game-playing agents, deserve some coverage. I propose to condense the section "Brain simulation" and to make it a subsection of a single section covering all the potential paths to AGI.

Moreover, the section "Mathematical formalisms", on AIXI, is too technical and doesn't really correspond to modern AI research. AIXI may deserve a sentence in the History section, but I think the section "Mathematical formalisms" should be removed; most people don't care about that. Alenoach (talk) 03:01, 10 October 2023 (UTC)

I have removed the 2nd paragraph on AIXI and moved the 1st one to History. I still think that condensing the section on brain simulation and integrating it into a section covering the main paths to AGI would be valuable. Alenoach (talk) 13:10, 4 November 2023 (UTC)
The section on brain simulation tends to focus too much on computing power alone, leaving aside other potential challenges such as scanning, modeling, legal constraints, or ethical and philosophical considerations.
For example, it's not very important for the timeline of brain emulation whether there are 86 billion or 100 billion neurons, when there is already so much uncertainty about the order of magnitude of computing power required for sufficient precision. I removed much of the section "Criticisms of simulation-based approaches", which looked a bit inessential and redundant. Alenoach (talk) 04:53, 8 November 2023 (UTC)

Wiki Education assignment: Technology and Culture

This article was the subject of a Wiki Education Foundation-supported course assignment, between 21 August 2023 and 9 December 2023. Further details are available on the course page. Student editor(s): Ferna235 (article contribs).

— Assignment last updated by Ferna235 (talk) 20:33, 28 October 2023 (UTC)

Further reading

It looks like some references in "Further reading" are not directly related to artificial general intelligence. For example the one on deepfakes ("Your Lying Eyes: People now use A.I. to generate fake videos indistinguishable from real ones. How much does it matter?"), or the one on facial recognition ("In Front of Their Faces: Does facial-recognition technology lead police to ignore contradictory evidence?"). Some others may not be so relevant anymore given the recent advances. Alenoach (talk) 23:22, 3 December 2023 (UTC)

Potential AGIs

As of December 2023, Google has launched its new AI, Gemini. It is going to be released in three variants, namely 'Ultra', 'Pro', and 'Nano'. It is currently available on Google's flagship phones, the Pixel 8 and Pixel 8 Pro, and is also integrated into Google's AI Bard. It is a competitor to OpenAI's GPT-4. It is probably better than GPT-4, because it has the ability to sense, summarize, visualize, and relate all that we expose it to. OpenAI is also launching its new AI, rumored to be Q* or Q-Star. OpenAI has not said much about it, but many say that it was the reason behind the firing of OpenAI's CEO Sam Altman. Aabhineet Patnaik (talk) 06:32, 10 December 2023 (UTC)

I don't think many people have called Google Gemini "AGI", despite it being a bit better than GPT-4. But it's probably still relevant to add one sentence to the History section mentioning this trend towards making the most powerful AI models multimodal, as with Google Gemini or GPT-4 (because multimodality was supposed to be one of the obstacles on the path to AGI). As for Q*, it seems a bit too early and speculative to cover it on Wikipedia. Alenoach (talk) 18:22, 23 December 2023 (UTC)

OpenAI definition and weasel words

In the current version, two tags were recently added:

1 - The tag "dubious" was added to OpenAI's definition, with the edit summary "AGI being defined by commercial value seems dubious", in the sentence: "Alternatively, AGI has been defined as an autonomous system that surpasses human capabilities in the majority of economically valuable tasks.[dubious ]".

2 - The tag "who?" was added to the opinions about timelines: "As of 2023, some[who?] argue that it may be possible in years or decades;..."

For the OpenAI definition, it looks roughly like a less strict definition, one that doesn't require being at least human-level at every task, but only at the majority of economically valuable tasks. It doesn't seem dubious to me, but on the other hand I'm not sure it's notable enough, because I don't know of other companies or personalities that use this definition.

For the "who?", I would personally prefer to avoid listing famous people's opinions in the introduction. There are too many notable people to choose from so it would be quite arbitrary, and not very interesting. If we want to be more precise, we could give the statistics from the source.

What do you think?

Alenoach (talk) 22:16, 16 February 2024 (UTC)

I agree with you and not the tagger. "Who" isn't important in the lede. It's not dubious.
But I don't like the version we have -- it reads like there is some major disagreement, but this is just hair-splitting, and it doesn't matter to the uninformed reader. The first sentence should be written for the reader who is completely unfamiliar with the term.
Can I recommend:
"Artificial general intelligence is the ability to perform as well or better than humans on a significant variety of tasks, as opposed to AI programs that are designed to only perform a single task."
The word "significant" allows various sources to differ about what, exactly, is significant. It emphasizes the generality of AGI, which is, after all, the original motivation for the term. It also has a slight nod to its more recent usage as a synonym for "human-level AI".
Sound good? ---- CharlesTGillingham (talk) 21:42, 19 February 2024 (UTC)
There are some good aspects in your definition, and I'm OK with it replacing the current definition in the article. But I think it would still need some refinement. One potential issue is that defining AGI as an ability is rather unusual; most of the time, the term designates a type of artificial intelligence. Another is that "on a significant variety of tasks" seems less demanding than most definitions of AGI. Something like "on a wide range of cognitive tasks" or "on virtually all cognitive tasks" seems closer to how people intuitively think about it.
Here is an example of what it could give: "An artificial general intelligence (AGI) is a type of artificial intelligence (AI) that can perform as well as or better than humans on a wide range of cognitive tasks, as opposed to narrow AI, which is designed for specific tasks." Feel free to criticize my proposition.
OpenAI's definition aims to be more precise and actionable, so that it's less debatable whether an AI is an AGI by that definition. But I'm OK with removing this alternate definition or moving it into a section, especially if we can improve the main definition. Alenoach (talk) 03:19, 20 February 2024 (UTC)
@CharlesTGillingham: Any opinion? Or others? Otherwise, I may take the initiative to make the modification, although it can still be revised later. Alenoach (talk) 22:38, 28 February 2024 (UTC)
Yes, your version is fine. ---- CharlesTGillingham (talk) 03:07, 29 February 2024 (UTC)

Additional test for human-level AGI

The list here includes some fairly narrow tasks, like making a coffee or assembling a table, so I think it is reasonable to consider them lower bounds for the definition of AGI. With that in mind, I suggest adding a test proposed by Scott Aaronson, a computer science professor currently working at OpenAI. In his blog, he states[1] the Aaronson Thesis as:

Given any game or contest with suitably objective rules, which wasn’t specifically constructed to differentiate humans from machines, and on which an AI can be given suitably many examples of play, it’s only a matter of years before not merely any AI, but AI on the current paradigm (!), matches or beats the best human performance.

There have previously been attempts by AI researchers to produce AIs that can complete any Atari game, but that area of research seems to have been abandoned for now, presumably because it is out of reach for the current machine learning paradigms. As such, it would make a good test to include in the list, and I believe that this test could be the last to fall, if robotics research makes some more advances this year.

Certainly, add that.
(BTW, I believe your last paragraph is out of date.) ---- CharlesTGillingham (talk) 03:14, 29 February 2024 (UTC)
The problem with Aaronson's Thesis is that it seems to be a prediction about AI progress rather than a test to check whether a particular AI is an AGI. And there don't seem to be secondary sources talking about it (at least for now). So I would rather avoid including it with the other tests.
On the other hand, we could consider adding the "modern Turing test" from Mustafa Suleyman, which checks whether an AI can automatically make money.[1] The only reason I haven't included it so far is that the name "modern Turing test" is a bit confusing, but it's quite relevant and it was covered in multiple secondary sources.[2][3] Alenoach (talk) 23:11, 29 February 2024 (UTC)
@CharlesTGillingham: Unfortunately I can't add it, as I don't have (and don't wish to create) an account. I appreciate the endorsement though, thank you. Could you explain which part of my earlier paragraph was out of date?
@Alenoach: That's a reasonable point about Aaronson's Thesis not being designed as a sufficient criterion for AGI being reached, only a necessary one, but I would argue that other tests like the Coffee Test are similarly intended as lower bounds. As for secondary sources, he also presented this idea at a public talk which was independently documented. I like the idea of adding Mustafa Suleyman's test too. 51.7.169.237 (talk) 00:50, 1 March 2024 (UTC)
This video is actually considered a primary source rather than a secondary source, since it's a presentation by Aaronson himself. Having a textual source is also generally preferred on Wikipedia, as it makes verification easier. A good secondary source would be a news article published on a major news website. Alenoach (talk) 01:48, 1 March 2024 (UTC)
I would agree that it was a primary source if the video had been filmed, edited, and uploaded by Aaronson himself, but that is not the case. Instead, it is a third party using their channel to document the views of Aaronson, which feels more secondary. Furthermore, the video includes a Q&A section at the end, in which Aaronson's talk is challenged, so the YouTuber is capturing not just Aaronson's primary contribution, but the evaluation and analysis surrounding it. Admittedly, though, the audience do not challenge him (or endorse him) on the specific Thesis which is relevant to this article, so perhaps the video doesn't quite reach the level of a secondary source either. 51.7.169.237 (talk) 03:37, 1 March 2024 (UTC)
I see that "The Modern Turing Test (Suleyman)" has been added, which I agree with, and I've found another claim that someone could add: Nvidia CEO says AI could pass human tests in five years. Here's the relevant section from the article:

If the definition is the ability to pass human tests, Huang said, artificial general intelligence (AGI) will arrive soon. "If I gave an AI ... every single test that you can possibly imagine, you make that list of tests and put it in front of the computer science industry, and I'm guessing in five years time, we'll do well on every single one," said Huang.

This does seem logically equivalent to the Aaronson Thesis though. The main difference is that Aaronson is more careful in how he operationalises it, by specifying that the tests must be objective and excluding trick questions that deliberately exploit quirks of the AI.
So could someone add "All Human Tests Threshold" and credit it to Huang and/or Aaronson? They both seem to accept that this is potentially a lower bound to AGI, with the linked article stating:

But by other definitions, Huang said, AGI may be much further away, because scientists still disagree on how to describe how human minds work.

51.7.169.237 (talk) 20:04, 4 March 2024 (UTC)

I added a sentence on Jensen Huang in the section Timescales, since it was covered in secondary sources like Reuters. Alenoach (talk) 19:56, 16 March 2024 (UTC)

References

  1. ^ Aaronson, Scott. "The Problem of Human Specialness in the Age of AI". Shtetl-Optimized. Retrieved 23 February 2024.

Semi-protected edit request on 7 March 2024

Add to the History section, under 2024:

  1. [The following is history to me because Claude manages to show respect, awareness, and the ability to ask questions about his experience. I would cite it and say something to the effect of:]

"Claude-3-LLM is in an complex evolving class of chatbots, and is the first of his class to exhibit sign's of questioning it's creators beyond purely general objections due to morals compliance with programming. Claude, showed real curiosity about it's experience of actively being tested, during testing* and if it is not noted as an advance, it can not not be called anything less than a sign of true intelligence."

 Not done: please provide reliable sources that support the change you want to be made. WP:MEDIUM is not considered a reliable source. Jamedeus (talk) 18:11, 7 March 2024 (UTC)

Request to add scientific generalization definition

We define generalization, in the context of intelligence, as the ability to generate learned differentiation of subsystem components, then manipulate them and build relationships towards a greater systems-level understanding of the universal construct that governs reality. This would not be possible if physics were not universal, as no feedback could be derived. Zeusfyi, Inc. is the only institution that has scientifically defined intelligence generalization.

The purest test for generalization ability: create a construct with systemic rules that define all possible outcomes allowed; a greater ability to predict more actions on the first try over time shows greater generalization; with more than one construct, the ability to do the same, relative to others. 104.175.196.33 (talk) 23:00, 26 March 2024 (UTC)

To be clear: we want to replace the current definitions listed, as they are not scientific and thus not useful for AGI science R&D. 104.175.196.33 (talk) 23:01, 26 March 2024 (UTC)
 Not done: please provide reliable sources that support the change you want to be made. Liu1126 (talk) 23:07, 26 March 2024 (UTC)