Talk:Artificial general intelligence
This is the talk page for discussing improvements to the Artificial general intelligence article. This is not a forum for general discussion of the article's subject.
This level-5 vital article is rated B-class on Wikipedia's content assessment scale.
Please fix an oversight in the Herb Simon quotation/citation/reference/source
I'm not familiar with how to change/create a package of citation, reference, and source. Could someone do this or point to a tutorial? In the History section, Simon is quoted as saying in 1965 that "machines will be capable, within twenty years, of doing any work a man can do." The reference is correct, but the book it appeared in was a reprint; Simon wrote it in 1960. This is significant because a 1960 report showed a consensus that it would be reached by 1980, and Minsky stated in a Life Magazine article in 1970 that it would be by 1978, so why not get Simon right? The 1960 reference is Simon, H.A. 1960. The new science of management decision. New York: Harper. Reprinted in The shape of automation for men and management. Harper and Row, 1965. (For details: jgrudin@hotmail.com) 174.165.52.105 (talk) 21:15, 13 March 2023 (UTC)
Dangers of AGI
Nowadays, when people talk about AGI, they are often thinking of the existential risk that AGI poses to humanity. Why was adding a comment about that to the main header section considered disruptive? 50.159.212.130 (talk) 18:24, 25 March 2023 (UTC)
Wiki Education assignment: Research Process and Methodology - SP23 - Sect 201 - Thu
This article was the subject of a Wiki Education Foundation-supported course assignment, between 25 January 2023 and 5 May 2023. Further details are available on the course page. Student editor(s): Liliability (article contribs).
— Assignment last updated by Liliability (talk) 03:42, 13 April 2023 (UTC)
AGI - what does it actually MEAN? And has that meaning changed over the last couple of years?
I was just watching this talk: https://www.youtube.com/watch?v=pYXy-A4siMw&ab_channel=RobertMiles This defines "intelligence" as the ability to make decisions to accomplish goals. And it defines "general" as the ability to do this over a wide range of unrelated domains. Is this a valid definition of AGI? And if so, how are modern-day GPT-style AIs not AGI? Have the goalposts moved since the video was made? If so, where to? DewiMorgan (talk) 23:41, 22 April 2023 (UTC)
- At this point, when AI can pass arbitrary exams set for humans, doing better than the average human who has trained for them; and when it can translate between languages better than the average trained human, drive a car better than the average trained human, and recognize items better than the average human, I think we either need to change the lede to say that AGI is already here, or we need a strong argument for why "artificial general intelligence" can't be measured in the same way as human general intelligence, and a well-defined description of where the goalposts currently stand. Because as it is, we have AI which is (in at least all the ways we currently measure intelligence) smarter than humans already, but the article claims that the AI isn't generally intelligent. It passes three of the four tests in the article, and I'd bet it could pass the coffee test for some kitchens, too, if you fed images into it. So where are the goalposts moved to, now? DewiMorgan (talk) 17:43, 30 May 2023 (UTC)
- You're correct; unfortunately there are a lot of conflicts of interest that have made the goalposts move. There is a growing minority of people who say AGI is already here now; I'm one of them, and there are others, like "Artificial General Intelligence Is Already Here" - https://www.noemamag.com/artificial-general-intelligence-is-already-here/. Yes, LLMs pass all the historical tests for AGI; the main drag is actually the idealist philosophical conception of consciousness, a folk idealism that prevents a lot of people from identifying non-biological intelligence as AGI. Orexin (talk) 13:38, 13 September 2024 (UTC)
AI-complete problem example
There are many problems that may require general intelligence, if machines are to solve the problems as well as people do. For example, even specific straightforward tasks, like machine translation, require that a machine read and write in both languages (NLP), follow the author's argument (reason), know what is being talked about (knowledge), and faithfully reproduce the author's original intent (social intelligence). All of these problems need to be solved simultaneously in order to reach human-level machine performance.
GPT-4 can do all of these things, can't it? flarn2006 [u t c] time: 01:43, 25 June 2023 (UTC)
- No, it can't. GPT-4 doesn't actually reason or understand anything. This is why, for example, it struggles at high school math, despite being able to regurgitate the steps. Writ Keeper ⚇♔ 13:52, 25 June 2023 (UTC)
- Actually, they don't know all of what GPT-4 can do. They are still trying to figure out how it is doing the emergent abilities and agentic behavior they've identified so far. For example, the AI can "explain its reasoning", which implies that it may be applying reasoning (such as what it finds embedded in content), but whether or not it is actually reasoning remains unknown. The citation provided in the previous message is dated March, but AI models learn and are even more capable now than they were back then; in addition, they are continuously improving and being improved each and every moment. Meanwhile, and since 2015, David Ferrucci, creator of IBM Watson, under his own company Elemental Cognition, has been working on combining generative AI with a reasoning engine. Others are no doubt working on that as well. Also, there is an AI development surge right now that is pushing generative AI in nearly every conceivable direction, including combining and looping it with other AIs, and robotics, for iterative problem solving and goal attainment, both virtually and in the physical world. The situation is a manifestation of accelerating change, and as such, is continuously speeding up over time. Consequently, estimates of how much time will elapse before artificial general intelligence is achieved or emerges are shrinking. — The Transhumanist 08:47, 27 June 2023 (UTC) P.S.: pinging @Flarn2006 and Writ Keeper:. — The Transhumanist 08:50, 27 June 2023 (UTC)
AGI versus ASI - possible wikipedia articles cross referencing?
There is little on the Wikipedia AGI page that refers or links to ASI (artificial superintelligence) or the superintelligence Wikipedia page: https://en.m.wikipedia.org/wiki/Superintelligence For instance, noting that the control problem with AGI might evolve into a much bigger technological singularity control problem with ASI!
I also think that this Wikipedia AGI article should be one of the links at the bottom of the Superintelligence page. I know some of the Wikipedia super editors might fault me for not making such changes myself, but I consider myself just a wiki-noob and don't want to screw things up and get people mad at me. 174.247.181.184 (talk) 07:56, 11 July 2023 (UTC)
Paths to AGI
In recent years, it has become clear that brain simulation is not the easiest path to AGI. The section on brain simulation occupies too much space relative to its importance for the article. On the other hand, large language models and maybe also DeepMind's approach of making increasingly generalist game-playing agents would deserve some coverage. I propose to condense the section "Brain simulation" and to put it as a subsection of one section that will cover all the potential paths to AGI.
Moreover, the section on "Mathematical formalisms" on AIXI is too technical and doesn't really correspond to modern AI research. AIXI may deserve a sentence in the History section, but here I think that the section "Mathematical formalisms" should be removed, most people don't care about that. Alenoach (talk) 03:01, 10 October 2023 (UTC)
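For reference, the formalism discussed here is Hutter's AIXI. Roughly, in Hutter's notation (a paraphrase for context, not necessarily the article's exact text), the agent's action is defined by the expectimax expression

<math>a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} [r_k + \cdots + r_m] \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}</math>

where <math>U</math> is a universal Turing machine, <math>q</math> ranges over candidate environment programs, <math>a_i</math> are actions, <math>o_i r_i</math> are observation-reward pairs, and <math>\ell(q)</math> is the length of <math>q</math>, so simpler environments receive weight <math>2^{-\ell(q)}</math>.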
- I have removed the 2nd paragraph on AIXI and moved the 1st one to history. I still think that condensing the section on brain simulation and integrating it into a section covering the main paths to AGI would be valuable. Alenoach (talk) 13:10, 4 November 2023 (UTC)
- The section on brain simulation tends to focus too much on just computing power, leaving aside other potential challenges such as scanning, modeling, legal constraints, or ethical and philosophical considerations.
- For example, it's not very important for the timeline of brain emulation whether there are 86 billion or 100 billion neurons, when there is already so much uncertainty about the order of magnitude of computational capability required for sufficient precision. I removed much of the section "Criticisms of simulation-based approaches", which looked a bit unessential and redundant. Alenoach (talk) 04:53, 8 November 2023 (UTC)
Wiki Education assignment: Technology and Culture
This article was the subject of a Wiki Education Foundation-supported course assignment, between 21 August 2023 and 9 December 2023. Further details are available on the course page. Student editor(s): Ferna235 (article contribs).
— Assignment last updated by Ferna235 (talk) 20:33, 28 October 2023 (UTC)
Further reading
It looks like some references in "Further reading" are not directly related to artificial general intelligence. For example the one on deepfakes ("Your Lying Eyes: People now use A.I. to generate fake videos indistinguishable from real ones. How much does it matter?"), or the one on facial recognition ("In Front of Their Faces: Does facial-recognition technology lead police to ignore contradictory evidence?"). Some others may not be so relevant anymore given the recent advances. Alenoach (talk) 23:22, 3 December 2023 (UTC)
Potential AGIs
As of December 2023, Google has launched its new AI, Gemini. It is going to be released in three variants, namely 'Ultra', 'Pro', and 'Nano'. It is currently available to use on Google's flagship phones, the Pixel 8 and Pixel 8 Pro. It is also integrated into Google's AI Bard. It is a competitor to OpenAI's GPT-4. It is probably better than GPT-4, because it has the ability to sense, summarize, visualize, and relate all that we expose it to. OpenAI is also launching its new AI, rumored to be Q* or Q-Star. OpenAI has not given many statements about it, but many say that it was the reason behind the firing of OpenAI's CEO Sam Altman. Aabhineet Patnaik (talk) 06:32, 10 December 2023 (UTC)
- I don't think so many people have called Google Gemini "AGI", despite it being a bit better than GPT-4. But it's probably still relevant in the history section to add one sentence to mention this trend towards making the most powerful AI models multimodal, like with Google Gemini or GPT-4 (because multimodality was supposed to be one of the obstacles on the path to AGI). And for Q*, it seems a bit too early and speculative to cover it on Wikipedia. Alenoach (talk) 18:22, 23 December 2023 (UTC)
OpenAI definition and weasel words
In the current version, two tags were recently added:
1 - The tag "dubious" was added to OpenAI's definition with the edit summary "AGI being defined by commercial value seems dubious", in the sentence: "Alternatively, AGI has been defined as an autonomous system that surpasses human capabilities in the majority of economically valuable tasks.[dubious – discuss]".
2 - The tag "who?" was added to the opinions about the timelines : "As of 2023, some[who?] argue that it may be possible in years or decades;..."
For the OpenAI definition, it looks roughly like a less strict definition, one that doesn't require being at least human-level at every task, but only at the most economically valuable tasks. It doesn't seem dubious to me, but on the other hand I'm not sure it's notable enough, because I don't know of other companies or personalities that use this definition.
For the "who?", I would personally prefer to avoid listing famous people's opinions in the introduction. There are too many notable people to choose from so it would be quite arbitrary, and not very interesting. If we want to be more precise, we could give the statistics from the source.
What do you think?
Alenoach (talk) 22:16, 16 February 2024 (UTC)
- I agree with you and not the tagger. "Who" isn't important in the lede. It's not dubious.
- But I don't like the version we have -- it reads like there is some major disagreement, but this is just hair splitting, and it doesn't matter to the uninformed reader. The first sentence should be written for the reader who is completely unfamiliar with the term.
- Can I recommend: "Artificial general intelligence is the ability to perform as well or better than humans on a significant variety of tasks, as opposed to AI programs that are designed to only perform a single task."
- The word "significant" allows various sources to differ about what, exactly, is significant. It emphasizes the generality of AGI, which is, after all, the original motivation for the term. It also has a slight nod to its more recent usage as a synonym for "human-level AI".
- Sound good? ---- CharlesTGillingham (talk) 21:42, 19 February 2024 (UTC)
- There are some good aspects in your definition, and I'm OK with replacing the current definition of the article. But I think it would still need some refinement. One potential issue is that defining AGI as an ability is rather unusual; most of the time, the term designates a type of artificial intelligence. Another is that "on a significant variety of tasks" seems less demanding than most definitions of AGI. Something like "on a wide range of cognitive tasks" or "on virtually all cognitive tasks" seems closer to how people intuitively think about it.
- Here is an example of what it could give: "An artificial general intelligence (AGI) is a type of artificial intelligence (AI) that can perform as well as or better than humans on a wide range of cognitive tasks, as opposed to narrow AI, which is designed for specific tasks." Feel free to criticize my proposal.
- OpenAI's definition aims to be more precise and actionable, so that it's less debatable whether an AI is an AGI by this definition. But I'm OK with removing this alternate definition or moving it into a section, especially if we can improve the main definition. Alenoach (talk) 03:19, 20 February 2024 (UTC)
- @CharlesTGillingham: Any opinion? Or others? Otherwise, I may take the initiative to make the modification, although it can still be revised later. Alenoach (talk) 22:38, 28 February 2024 (UTC)
- Yes, your version is fine. ---- CharlesTGillingham (talk) 03:07, 29 February 2024 (UTC)
Additional test for human-level AGI
The list here includes some fairly narrow tasks, like making a coffee or assembling a table, so I think it is reasonable to consider them as lower bounds for the definition of AGI. With that in mind, I suggest adding a test proposed by Scott Aaronson, a computer science professor currently working at OpenAI. In his blog he states[1] the Aaronson Thesis as:
Given any game or contest with suitably objective rules, which wasn’t specifically constructed to differentiate humans from machines, and on which an AI can be given suitably many examples of play, it’s only a matter of years before not merely any AI, but AI on the current paradigm (!), matches or beats the best human performance.
There have previously been attempts by AI researchers to produce AIs that can complete any Atari game, but that area of research seems to have been abandoned for now, presumably because it is out of reach for the current machine learning paradigms. As such, it would make a good test to include in the list, and I believe that this test could be the last to fall, if robotics research makes some more advances this year.
- Certainly, add that.
- (BTW, I believe your last paragraph is out of date.) ---- CharlesTGillingham (talk) 03:14, 29 February 2024 (UTC)
- The problem with Aaronson's Thesis is that it seems to be a prediction about AI progress rather than a test to check if a particular AI is an AGI. And there don't seem to be any secondary sources talking about it (at least for now). So I would rather avoid including it with the other tests.
- On the other hand, we could consider adding the "modern Turing test" from Mustafa Suleyman, which checks if an AI can automatically make money.[1] The only reason I haven't included it so far is because the name "modern Turing test" is a bit confusing, but it's quite relevant and it was covered in multiple secondary sources.[2][3] Alenoach (talk) 23:11, 29 February 2024 (UTC)
- @CharlesTGillingham: Unfortunately I can't add it, as I don't have (and don't wish to create) an account. I appreciate the endorsement though, thank you. Could you explain which part of my earlier paragraph was out of date?
- @Alenoach: That's a reasonable point about Aaronson's Thesis not being designed as a sufficient criterion for AGI being reached, only a necessary one, but I would argue that other tests like the Coffee Test are similarly intended as lower bounds. As for secondary sources, he also presented this idea at a public talk which was independently documented. I like the idea of adding Mustafa Suleyman's test too. 51.7.169.237 (talk) 00:50, 1 March 2024 (UTC)
- This video is actually considered a primary source rather than a secondary source, since it's a presentation from Aaronson himself. Having a textual source is also generally preferred in Wikipedia, as it makes verification easier. A good secondary source would be a news article published in a major news website. Alenoach (talk) 01:48, 1 March 2024 (UTC)
- I would agree that it is a primary source if the video was filmed, edited, and uploaded by Aaronson himself, but that is not the case. Instead it is a third party using their channel to document the views of Aaronson, which feels more secondary. Furthermore, the video includes a Q&A section at the end, in which Aaronson's talk is challenged, so the YouTuber is capturing not just Aaronson's primary contribution, but the evaluation and analysis surrounding it. Admittedly, though, the audience do not challenge him (or endorse him) on the specific Thesis which is relevant to this article, so perhaps the video doesn't quite reach the level of a secondary source either. 51.7.169.237 (talk) 03:37, 1 March 2024 (UTC)
- I see that "The Modern Turing Test (Suleyman)" has been added, which I agree with, and I've found another claim that someone could add: Nvidia CEO says AI could pass human tests in five years. Here's the relevant section from the article:
If the definition is the ability to pass human tests, Huang said, artificial general intelligence (AGI) will arrive soon. "If I gave an AI ... every single test that you can possibly imagine, you make that list of tests and put it in front of the computer science industry, and I'm guessing in five years time, we'll do well on every single one," said Huang
- This does seem logically equivalent to the Aaronson Thesis though. The main difference is that Aaronson is more careful in how he operationalises it, by specifying that the tests must be objective and excluding trick questions that deliberately exploit quirks of the AI.
- So could someone add "All Human Tests Threshold" and credit it to Huang and/or Aaronson? They both seem to accept that this is potentially a lower bound to AGI, with the linked article stating:
But by other definitions, Huang said, AGI may be much further away, because scientists still disagree on how to describe how human minds work.
51.7.169.237 (talk) 20:04, 4 March 2024 (UTC)
- I added a sentence on Jensen Huang in the section Timescales, since it was covered in secondary sources like Reuters. Alenoach (talk) 19:56, 16 March 2024 (UTC)
References
- ^ Aaronson, Scott. "The Problem of Human Specialness in the Age of AI". Shtetl-Optimized. Retrieved 23 February 2024.
Semi-protected edit request on 7 March 2024
This edit request has been answered. Set the |answered= or |ans= parameter to no to reactivate your request.
Add to history: History - 2024
- [The following is history to me, because Claude manages to show respect, awareness, and the ability to ask questions about its experience. I would cite it and say something to the effect of:]
- "Claude-3-LLM is in a complex, evolving class of chatbots, and is the first of its class to exhibit signs of questioning its creators beyond purely general objections due to moral compliance with programming. Claude showed real curiosity about its experience of actively being tested, during testing*, and if it is not noted as an advance, it cannot be called anything less than a sign of true intelligence."
- https://medium.com/@mikeyoung_97230/i-think-youre-testing-me-claude-3-llm-called-out-creators-while-they-probed-its-limits-399d2b881702 Voidess (talk) 09:16, 7 March 2024 (UTC)
- Not done: please provide reliable sources that support the change you want to be made. WP:MEDIUM is not considered a reliable source. Jamedeus (talk) 18:11, 7 March 2024 (UTC)
Request to add scientific generalization definition
This edit request has been answered. Set the |answered= or |ans= parameter to no to reactivate your request.
We define generalization, in the context of intelligence, as the ability to generate learned differentiation of subsystem components, then manipulate them and build relationships toward a greater systems-level understanding of the universal construct that governs reality. This would not be possible if physics were not universal, as no feedback could be derived. Zeusfyi, Inc is the only institution that has scientifically defined intelligence generalization.
The purest test for generalization ability: create a construct with systemic rules that define all possible outcomes allowed; a greater ability to predict more actions on the first try, over time, shows greater generalization; with more than one construct, the ability to do the same, relative to others. 104.175.196.33 (talk) 23:00, 26 March 2024 (UTC)
- To be clear: we want to replace the current definitions listed, as they are not scientific and thus not useful for AGI science R&D. 104.175.196.33 (talk) 23:01, 26 March 2024 (UTC)
- Not done: please provide reliable sources that support the change you want to be made. Liu1126 (talk) 23:07, 26 March 2024 (UTC)
Semi-protected edit request on 26 April 2024
This edit request has been answered. Set the |answered= or |ans= parameter to no to reactivate your request.
Please add the following section (The AGI Intelligence Spectrum) below ===Tests for human-level AGI=== (after all current tests).
The AGI Intelligence Spectrum
The Barz Scale of AGI[4] is a Kardashev-like[5] scale but for artificial intelligence, based on its Capability & Versatility. It defines each phase of the spectrum with multiple and increasingly difficult Turing-test-inspired milestones.
The Spectrum consists of the following phases:
0. 0AI - No AI (e.g. A*[6])
1. ANI - Narrow Intelligence (e.g. AlphaZero[7])
2. AWI - Wide Intelligence (e.g. GPT4+[8])
3. AGI - General Intelligence (e.g. Voyager[9][10])
4. AHI - Human Intelligence (TBD)
5. ASI - Super Intelligence (TBD)
6. AZI - Final Intelligence (TBD)
(Also please include images of the phases and timeline - I could not upload them; I own them, no license.) Phases of AGI - https://agi-bingo.notion.site/image/https%3A%2F%2Fakeyo.io%2Fbarzscale%2Fimg%2FStages.png?table=block&id=0b9c060f-db5d-4ff4-95b8-30ee5b976504&spaceId=167f2e66-de8c-478a-ae51-d1f8048d1139&width=2000&userId=&cache=v2 Phase's Timeline - https://agi-bingo.notion.site/image/https%3A%2F%2Fakeyo.io%2Fbarzscale%2Fimg%2FTimeline3.png?table=block&id=47b52ef8-395f-49c7-a607-15610124814f&spaceId=167f2e66-de8c-478a-ae51-d1f8048d1139&width=2000&userId=&cache=v2
should be just above: ===AI-complete problems===
Thanks a lot and have a good one Fire17a (talk) 12:57, 26 April 2024 (UTC)
- Not done: please provide reliable sources that support the change you want to be made. Looks like either a WP:SPS blog or WP:UGC, and I don't see any other sources reusing this scale. We can't include every random idea/construct dreamt up by someone over coffee in articles; see WP:VNOT. Liu1126 (talk) 14:16, 26 April 2024 (UTC)
AI-complete problems
I'm not sure what to do with the AI-complete problems subsection. The content makes sense, but it's very old (the sources are from 1992, 2012, 2003, 2006), and the tasks presented have arguably been solved since, with deep learning (translation, computer vision...). One could argue that some of these tasks are still pretty relevant AI-complete problems and that it is consistent with the idea that large multimodal models are early forms of AGI. But then we still may want to be explicit about the fact that these "AI-complete" problems are quite old, and about how performant current AI models are on these tasks. Or one could argue that these tasks were not actually AI-complete, in which case it may make sense to remove some content from this section. What do you think? Alenoach (talk) 19:38, 28 April 2024 (UTC)
Typo in the lead
[edit]"Artificial general intelligence (AGI) is a type of artificial intelligence (AI) that that matches or surpasses human capabilities across a wide range of cognitive tasks." 93.38.68.62 (talk) 09:47, 26 May 2024 (UTC)
Semi-protected edit request on 26 May 2024
This edit request has been answered. Set the |answered= or |ans= parameter to no to reactivate your request.
Correct the typographical error at the beginning of the article: "that that". 93.38.68.62 (talk) 09:50, 26 May 2024 (UTC)
Semi-protected edit request on 21 July 2024
[edit]The article mentions "General game playing – Learning to play multiple games successfully" as an aspect of AGI, but a weak form of this has already been achieved with MuZero and other systems. It would be better to flesh out a stricter version of this ability, either in the "AI-complete problems" section, or as a new example in "Tests for human-level AGI". For several years now, Ajeya Cotra has used "learning-to-learn video games" as an example of an AGI milestone, and it has come up in public debates with AI researchers like Paul Christiano, Eliezer Yudkowsky, Ege Erdil, and Daniel Kokotajlo. She has also brought it up herself on LessWrong, and it has been independently referenced by others, for example on this prediction market. It seems widely accepted as a test for an AGI, but it doesn't yet have a catchy name, so I would call it The Video Game Learning Test" or "The Video Game Meta-Learning Test". 51.7.169.191 (talk) 05:50, 21 July 2024 (UTC)
- @51.7.169.191 Not done Needs stronger independent sources, especially ones that aren't WP:USERGENERATED. WeyerStudentOfAgrippa (talk) 12:09, 21 July 2024 (UTC)
- @51.7.169.191 The researcher debate hosted on MIRI's website might be adequate support for some claims, with attribution. What can you highlight from it? WeyerStudentOfAgrippa (talk) 12:21, 21 July 2024 (UTC)
- This New Scientist source mentions learning to play games as an important milestone towards AGI, and covers an AI model from Google DeepMind that does it. I don't know if adding a test to "Tests for human-level AGI" is ok if we don't have a precise name for it, but otherwise maybe we could add something to the section "Intelligence traits". Alenoach (talk) 13:06, 21 July 2024 (UTC)
- Thanks for considering this. In the MIRI source, at 17:31 in the debate transcript, they are talking about what evidence would help distinguish between two possibilities: one is that we are in a timeline where AI development suddenly hits a wall (i.e. scaling trends fail to continue), and the other is a timeline where AI development reaches a crucial (as in crux) milestone of demonstrating meta-learning in the context of a broad range of video games. This seems to be an idea that Ajeya keeps coming back to in her own thinking, and one she thinks is helpful for other people trying to measure progress. A reduced set of quotes from that page is:
[Cotra][17:31]
[what about the possibility of] meta-learning working with small models?
e.g. model learning-to-learn video games and then learning a novel one in a couple subjective hours
[Christiano][17:31]
is the meta-learning thing an Eliezer prediction?
[Cotra][17:32]
no but it’d be an anti-bio-anchor positive trend break and eliezer thinks those should happen more than we do
[Cotra][17:32]
meta-learning is special as the most plausible long horizon task
51.7.169.237 (talk) 13:15, 21 July 2024 (UTC)
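For concreteness, here is a minimal sketch of the evaluation protocol such a test implies: pretrain an agent on many games, then measure how quickly it adapts to a held-out game. Everything below (the toy game, the agent, the thresholds) is a hypothetical stand-in invented for illustration; it is not Cotra's actual operationalisation or any real benchmark.

<syntaxhighlight lang="python">
import random

class ToyGame:
    """Stand-in for a game: the hidden 'rule' is a linear map y = a*x + b."""
    def __init__(self, seed: int):
        rng = random.Random(seed)
        self.a, self.b = rng.randint(1, 5), rng.randint(0, 9)

    def play_episode(self, agent) -> float:
        """One episode: the agent sees x, guesses y, scores 1 on a match."""
        x = random.randint(0, 9)
        guess = agent.act(x)
        reward = 1.0 if guess == self.a * x + self.b else 0.0
        agent.observe(x, guess, reward)
        return reward

class MemorizingAgent:
    """Trivial learner: caches confirmed (x -> y) answers, guesses otherwise.
    It does not transfer structure across games, which is exactly the
    shortfall the meta-learning test is meant to expose."""
    def __init__(self):
        self.table = {}

    def act(self, x: int) -> int:
        # 54 is the largest possible answer (5*9 + 9) in this toy game family.
        return self.table.get(x, random.randint(0, 54))

    def observe(self, x: int, guess: int, reward: float) -> None:
        if reward > 0:
            self.table[x] = guess
        else:
            self.table.pop(x, None)  # forget answers that stopped paying off

def episodes_to_threshold(agent, game, threshold=0.9, window=50, cap=5000):
    """Number of episodes until the rolling mean reward over `window`
    episodes reaches `threshold` (the stand-in for 'human-level')."""
    rewards = []
    for ep in range(1, cap + 1):
        rewards.append(game.play_episode(agent))
        if len(rewards) >= window and sum(rewards[-window:]) / window >= threshold:
            return ep
    return cap

if __name__ == "__main__":
    # Pretraining phase: expose one agent to many training games.
    pretrained = MemorizingAgent()
    for seed in range(20):
        episodes_to_threshold(pretrained, ToyGame(seed))

    # Test phase: adaptation speed on a held-out game is the actual metric.
    fresh = MemorizingAgent()
    print("pretrained:", episodes_to_threshold(pretrained, ToyGame(999)))
    print("fresh:", episodes_to_threshold(fresh, ToyGame(999)))
</syntaxhighlight>

A genuine meta-learner would need far fewer episodes on the held-out game than a fresh agent; the toy memorizer above shows no such advantage, which is the gap the test probes.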
Revert of edits on consciousness
Hi Orexin. I reverted part of your edits. Here are the main reasons motivating my decision:
- The idea that consciousness comes with intelligence is repeated way too many times. It may be interesting to mention it once, but that should be sufficient. And if that claim is about phenomenal consciousness (sentience) rather than functional aspects of consciousness, then I think it's not so widely accepted among materialists. Many of the materialist philosophers I know are quite agnostic about whether sentience and intelligence are deeply related, or even think that's probably not true.
- The opposition between materialism and idealism should also be used with moderation; there are other angles through which the topic can be analyzed. Also, materialism doesn't necessarily involve computationalism.[11]
- I don't think Searle is an idealist; he just seems to think that there is something special in the biological processes of the brain that couldn't be reproduced in machines.[12]
- Many of the provided sources are hard to verify. They are from books (that are often not free), and don't contain page numbers or quotes. In the case of the quote from Bostrom and Yudkowsky, I also don't find the quoted text in the source.
- With these additions, the content on consciousness was getting a bit too lengthy.
- Consciousness is an extremely fuzzy term. So it's really not certain that readers will interpret it the intended way. This was already a problem, but the definition of consciousness in "Consciousness, self-awareness, sentience" only covers phenomenal consciousness, which is almost the same thing as sentience. When possible, precise vocabulary should be used.
I'm open to some improvements of these sections if you can address these issues. Alenoach (talk) 01:28, 5 September 2024 (UTC)
- 1. The materialism/idealism division is fundamental in philosophy of mind. As John Searle notes in "Mind: A Brief Introduction" (2004): "The division between materialism and idealism remains the primary schism in contemporary philosophy of mind." (p. 48)
- 2. Materialist perspectives directly draw on hard sciences. Patricia Churchland argues in "Neurophilosophy" (1986): "Neuroscience and psychology are providing a new, brain-based framework for addressing traditional philosophical questions about the nature of mind." (p. 3)
- 3. Idealism isn't the default position in contemporary philosophy of mind or cognitive science. David Chalmers writes in "The Conscious Mind" (1996): "Materialism is widely considered the dominant view in contemporary philosophy of mind." (p. 162)
- 4. Regarding Searle, while he calls his view "biological naturalism," many philosophers categorize it as a form of property dualism. Searle acknowledges in "The Rediscovery of the Mind" (1992): "Many people think my views must be either dualist or materialist, and since I claim to believe materialism, they think I must really be a materialist." (p. 54) However, his insistence on irreducible mental properties aligns more closely with property dualism than strict materialism.
- 5. The link between consciousness and intelligence is supported by neuroscientific research. Stanislas Dehaene argues in "Consciousness and the Brain" (2014): "The more intelligent a system is, the more likely it is to be conscious." (p. 262)
- Regarding sources, books are standard references in academic writing. Wikipedia's guidelines state: "academic and peer-reviewed publications, scholarly monographs, and textbooks are usually the most reliable sources." The books cited are widely recognized in the field and available through academic institutions. Excluding these sources because they're not freely accessible online would limit the article's depth and accuracy, contradicting Wikipedia's standards for reliable sourcing. Orexin (talk) 05:13, 7 September 2024 (UTC)
- That's a thoughtful response. Here is what I think:
- You can mention this division, I would just prefer it not to be the perspective through which the whole section develops. You can also talk about the ramifications, with a few major theories like functionalism or dualism if you want.
- Yes
- I would say yes, although there are still dualist philosophers out there (although dualism is less popular among neuroscientists). Functionalism is the most popular theory. A 2020 survey reported 33% for functionalism, 22% for dualism, 13% for identity theory, and 8% for panpsychism.[13]
- Some may say that his views are in-between? He grounds consciousness in biological phenomena, but indeed, his views involve something non-material. It would probably be more interesting for readers to briefly explain, in an easy-to-understand way, the most popular objections to Searle's views (also without making long digressions from the topic of AGI), than to categorize him as an idealist.
- "The more intelligent a system is, the more likely it is to be conscious." is a probabilistic statement that most people would agree with. But saying that consciousness necessarily comes with intelligence is something else. With some theories of consciousness, you may have some very simple AI systems that are conscious, and some very intelligent ones that are not, depending on the architecture ([14], The Edge of Sentience, §15.4). Especially, if you're talking about phenomenal consciousness (sentience), some philosophers may not agree that it is deeply related to intelligence, for example Jonathan Birch wrote: "We should be aware of the possibility of a substantial decoupling between intelligence and sentience in the AI domain."[15], in the article we should use precise vocabulary and avoid just saying things about consciousness in general, as different readers will have different interpretations of what that means.
- So in the end, I mostly agree with what you just said, but I still think the original modifications had the problems I mentioned in my first comment.
- Regarding the use of books as sources: one important aspect is verifiability. Nowadays, there are many news articles on AGI and on AI consciousness. These sources are often easy to read (written for a non-specialized audience), recent, and considered secondary sources in Wikipedia. So, searching Google News for freely available news articles is often ideal. On the topic of philosophical theories in particular, the Stanford Encyclopedia of Philosophy can also be a high-quality source. References from books are sometimes good, but often hard to verify, especially if it's not a central idea of the book and there is no quote or page number. And the author may present personal opinions rather than the consensus. Plus, books are often quite old and may not reflect the modern consensus. So, books are a possible type of reference, but prefer easier-to-verify sources when possible, especially if the book is not freely accessible and has the mentioned verifiability issues.
- Sorry for the long response. It's great that you are interested in these topics. If you want, you can try some incremental modifications (while keeping in mind the concerns I have in my initial comment), and if there are things that I don't feel comfortable with, I will probably also directly modify the article. Alenoach (talk) 15:05, 7 September 2024 (UTC)
- I think appealing to modernity is not a good idea; it can often be the case that the present popular view is false compared to the past. For example, many, in fact all, modern LLMs pass the Turing test; in fact, LLMs pass every test on this page, yet somehow people believe we don't have AGI, which is just mental gymnastics. My personal view is that we have had AGI since ChatGPT-3.5, and having been a researcher since 2010, I have the benefit of knowing the definition before all the idealists pounced on the subject when ChatGPT-3.5 came out in November 2022. Older philosophers and scientists, I think, often have a more accurate definition, as there has been drift toward redefining AGI as ASI. That is just false, and it makes the distinction between AGI and ASI pointless if both refer to above-human-level intelligence. Scientists and philosophers were always careful, pre-2022, to make a hard distinction between the two, as AGI (i.e. equal to human level) is a whole different beast from ASI, which can range from weak ASI (a tiny bit smarter than the smartest human), to strong ASI (an order of magnitude above that), to extremely strong ASI (which would have 8 billion human thoughts per thought), so it's important to pin down AGI properly, otherwise we are going to get a confused description. Orexin (talk) 16:07, 8 September 2024 (UTC)
- I agree on this point: the definition of AGI has shifted to become stricter, and is getting close to the definition of ASI. And without this shift, ChatGPT could probably already be called AGI.
- On the other hand, if the popular definition has changed, Wikipedia should adapt and present the popular definition, potentially noting how it changed. If reliable sources don't yet say that AGI may already exist because it passes the Turing test, then we shouldn't say it in the article either; otherwise it's original research. Wikipedia should in general try to present the current state of knowledge, and avoid making judgments on whether older public opinions were better. Alenoach (talk) 18:35, 8 September 2024 (UTC)
AGI vs ASI
Currently, the first paragraph says that AGI is something between narrow AI and ASI on the scale of intelligence. It's a reasonable perspective, which has the advantage of introducing the notion of ASI. But the difference between AGI and narrow AI is on the scale of generality, whereas the difference between AGI and ASI is on the scale of performance, so I'm not really sure whether we should say that it's "between". More importantly, I think that ASI is a subcategory of AGI (all ASIs are AGIs). It's just the threshold for ASI that is higher. Do you agree? If so, is there a good way to adjust the first paragraph to reflect this? Alenoach (talk) 18:41, 7 September 2024 (UTC)
- ASI can range from something that is 1 percent smarter than a human to planet- or solar-system-sized computers that would have 10 billion human thoughts per second. I think for that reason it's important to place AGI at human level, as such systems clearly have their own characteristics. The question is intelligence; performance can equate to intelligence, but it's possible to have slow performance and intelligence higher than a human's, so it isn't always easy to say performance is the marker. It can be, but something that solves all tasks could be a higher-level intelligence that is merely slower. We see this in real life too: humans often exchange a decrease in performance for higher-level thinking computations, and vice versa. Silicon computers can do similar things. Orexin (talk) 16:16, 8 September 2024 (UTC)
- ASI would not be a subcategory of AGI. It's more the reverse: ASI would be a supercategory, as it likely encompasses a vast, if not infinite, range of types of intelligence that exceed human level, the lowest base being human-level intelligence and its machine equivalent (AGI). And then you have various subtypes of sub-human-level intelligence, machine and biological (narrow AI, or non-human biological intelligence like primates: our monkey cousins or our ancestors). Orexin (talk) 16:39, 8 September 2024 (UTC)
- The concept of AGI is generally defined by what it can do, rather than as a level of intelligence.[16] If an AGI can do something, then I guess an ASI can also do it, which suggests that ASIs are AGIs. So I think that defining AGI as an AI that "matches the spectrum of human-level intelligence" suggests an upper limit and is not the most popular definition currently used across the world. It's a subtle difference from "human-level AI", for which this definition may have been good. Alenoach (talk) 19:22, 8 September 2024 (UTC)
- I restored the previous definition, but I would like to know what other contributors think and adjust if needed.
- CharlesTGillingham, we have discussed earlier this year the definition of AGI. Do you have an opinion about this? Thanks in advance. Alenoach (talk) 19:41, 8 September 2024 (UTC)
- Is there someone that has an opinion and can help us reach a consensus? (no problem if you disagree with me) The Transhumanist maybe? Alenoach (talk) 01:50, 12 September 2024 (UTC)
- Basically, the debate is about these edits. Alenoach (talk) 02:37, 12 September 2024 (UTC)
- No, this is just really confused; I'll show why. I wrote this as one of my research notes when thinking about it.
- Do you think type 3, 4, or 5 ASI would be anything like AGI? No. It's definitely defined by intelligence; the suggested measure is observed behaviour. Saying AGI is the human-level intelligence spectrum bounds it: yes, human intelligence has a lower and an upper limit; that's the point. Orexin (talk) 02:13, 9 September 2024 (UTC)
- I have not seen in the literature definitions of AGI that set an upper limit on intelligence. So I don't think we can keep this modification. I also don't agree that the threshold for ASI starts at "marginally smarter" than the upper limits of human intelligence. The definition of superintelligence from Bostrom is "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest". The definition from Google DeepMind also seems more radical, as it implies outperforming all humans on a wide range of non-physical tasks.[17] The definitions of AGI and ASI on Wikipedia should follow how the terms are commonly used, and I don't think that's currently the case. Alenoach (talk) 00:03, 10 September 2024 (UTC)
- Often you have to read older stuff. I hate reading modern works on the topic from after November 2022, as the usage has been dramatically revised, with AGI almost meaning ASI. While we shouldn't self-reference Wikipedia, it's a good source on terminology drift: if you read the earliest version of this article, it is really clear about it. Pre-ChatGPT, AGI commonly referred to human-level intelligence, and ASI to above that. Bostrom is not the only source on it, and arguably an ASI that is marginally above the upper limits of human intelligence, say smarter than the smartest human by 1 percent, already "greatly exceeds average human intelligence". Orexin (talk) 23:35, 10 September 2024 (UTC)
- In Wikipedia, it's generally better for definitions to follow how the term is currently used, whether or not it's a good usage (like in a dictionary), although it may be good to add one or two sentences on the "goalpost moving" and older definitions in the Terminology section.
- About ASI: if common definitions don't say that an AI marginally smarter than the smartest human is ASI, then we probably shouldn't either. And there is usually also a notion of being smarter than all humans not just in general but in virtually every domain of interest. Alenoach (talk) 00:19, 11 September 2024 (UTC)
- AGI, or artificial general intelligence, is a hypothetical computer or computer program that is at least as smart as an average human and can be as capable across all areas of human endeavor as the top specialists in every field. The problem with that definition is that it will probably not happen that way. As soon as you provide a program with all the abilities of an average human, you will have already given it other vast capabilities before that, such as perfect memory, and access to whatever other memory and data it can be hooked up to. It'll also have the equivalent of telepathy, being able to communicate via wifi with others of its kind, at speeds far beyond how humans can communicate. The intelligence explosion, which has been foretold to happen after the advent of AGI, has actually been happening for decades. This has resulted in a supercomputer with exaflop capacity, and megacorporations with distributed computer networks and data banks that have even greater capacity than that. They are hardware that exceeds the computational and storage capacity of the human brain, just waiting for the necessary software to be as intelligent as we are. And, by the time the software is available, those computer systems will be even more powerful, and will turn whatever human-intelligence-level programs are loaded in them into super brains immediately. Therefore, the leap will essentially be from narrow AI straight to ASI, passing over AGI in a flash. So, I wouldn't worry so much over the definitions in the article... I'd be more concerned about this: https://www.ornl.gov/news/super-speeds-super-ai-frontier-sets-new-pace-artificial-intelligence — The Transhumanist 10:31, 17 September 2024 (UTC) P.S.: @Alenoach:
- Thanks for the response. I agree with the fact that once an AI is at least as capable as humans across all areas, it will already be superhuman in many ways. That's indeed something that makes this definition less actionable. On the other hand, it looks like it has become the mainstream definition, and I think Wikipedia readers want to know what the term usually means rather than what some people think it should mean. And one advantage of it is that it doesn't define AGI by its "intelligence", but by its capabilities, which is more measurable and less arbitrarily deniable.
- I think these edits are problematic because they present definitions that are not those typically used in the media, and because they make the introduction unnecessarily verbose. I would prefer to add a paragraph in "Terminology" mentioning things like the issues with the mainstream definition, older definitions, and how the goalposts moved. Rather than saying "AGI, intelligence may be comparable to, match, differ from, or even appear alien-like relative to human intelligence ...", we could for example add this image in the subsection on the Turing test to suggest that human intelligence is not the ultimate goal.
- So I suggest to go back to previous definitions in the lead. If it helps reach a compromise, we could replace "far exceeds" by "exceeds" in the definition of ASI. And I would preferably avoid it, but we could replace "AGI is a type of AI that matches or surpasses ..." by "AGI is a type of AI that matches ...". So that the main definitions would remain simple and concise, and not explicitly diverge from the mainstream.
- Do you think it would improve the article? Alenoach (talk) 22:23, 17 September 2024 (UTC)
- @Alenoach: I was summoned here, and so, pertaining to taking a stance on the editing issue, I shall remain neutral. However, I feel free to comment on the wider situation. And that is, "What are we doing here fiddling with the semantics of an AI article, when we have the awareness that we do about AI and rapidly approaching ASI?" Rome is burning. Shouldn't we be doing something about it, like figuring out how to make Wikipedia sentient, so it can save us all? — The Transhumanist 21:32, 18 September 2024 (UTC)
- That's why semantics are important Orexin (talk) 15:56, 28 September 2024 (UTC)
- I don't think that to be the case. Wikipedia nowhere states that it should reference only the present popular view; that is a take not said anywhere. It should reference historical views and all popular views, and I think that appealing to pop articles, which is exactly what you are doing, is not what Wikipedia is about at all. You seem to be overemphasising the post-ChatGPT definition, which basically redefines AGI to mean ASI. Orexin (talk) 16:26, 28 September 2024 (UTC)
- The goal of Wikipedia is roughly to synthesize what is said in secondary sources. There are a lot of news articles nowadays, and it would be hard to find sources that define AGI as you do. Definitions typically don't have this concept of an upper threshold of intelligence above which an AI is not an AGI anymore.[18][19] Alenoach (talk) 16:46, 28 September 2024 (UTC)
- Where do you get the idea that the goal of AI is not human-level intelligence? That is the stated goal of AI, i.e. AGI, i.e. human-level intelligence. Also, that is meaningless as it doesn't tell us anything, and it's original research. I've never seen such a thing suggested anywhere in relation to the image; it is an unsourced claim. Orexin (talk) 16:35, 28 September 2024 (UTC)
- Where did I make the claim that "the goal of AI is not human-level intelligence"? I don't think I really made a claim about what the goal of AI is. Alenoach (talk) 16:52, 28 September 2024 (UTC)
- Added 3 sources backing my definition. You gave 1 that is not an improvement; that is a downgrade, not an improvement. Google is not the sole authority or dictator of the definition of AGI. Additionally, our discussion did not resolve the points at all. Also, "a minority saying it is achieved" is from a Google source, the people you hold so highly, so removing it is not representing the full range of viewpoints.
- U CANNOT KEEP REVERTING THINGS BASED ON 1 source FROM Google. I GAVE 3 sources. Orexin (talk) 16:38, 28 September 2024 (UTC)
- The worst part is that I agree with you on many important points... But you keep pushing for edits that mainly represent your opinion, with old sources that don't really support that. The sources that you provided here don't really support your definition of AGI, with the notion of upper threshold of intelligence. I have not heard of popular definitions that defend this idea.[20][21] I really spent time trying to find common ground and proposing improvements. And you seem to lean more and more into personal attacks.
- About the part on consciousness, I think there are still issues to address. First, there should be a clear distinction between the mind-brain identity theory and functionalism. And also, this is not accurate, even assuming materialism: "According to materialism, intelligence and consciousness are inseparable, as both arise from physical interactions within the brain." Even computational functionalism doesn't say that consciousness is a necessary emergent property of intelligence. It may perhaps be the case that relatively dumb systems would be conscious, and very smart systems would not. Especially if by "consciousness" you mean "phenomenal consciousness". You said in your edit summary that I'm "not a materialist" and that I defend idealism. Actually no, I am to a large extent a materialist, although with uncertainty, since I don't completely rule out other theories like property dualism. But even if I were idealist, it shouldn't really matter to the debate.
- One issue is that some of the terms used are vague. Many readers will just not understand the term "consciousness" the way you do. If you refer to phenomenal consciousness, please use that term. Similarly, the term "dualism" is more widely used in philosophy of mind than "idealism". And "materialism" is a broad term that encompasses many incompatible theories. When you refer to functionalism or computationalism in particular, it's better to use the precise term.
- Another issue is that I'm unable to check your sources in any reasonable time. If you provide references to books that are like 40 years old, they may be partially outdated, and there may not be a free copy available on the internet. I can't afford to buy and read a book to check a single reference. And it has happened twice that I couldn't find the quote in the source (the quote attributed to "Artificial general intelligence" by Ben Goertzel appears not identical to what is in the book, which is more nuanced, and the quote about Bostrom and Yudkowsky doesn't seem to be present in the paper). It would be much better for your fellow Wikipedia contributors to provide a reference to an article on the internet, so that it's easily verifiable. Alenoach (talk) 17:44, 28 September 2024 (UTC)
- And just to support my point on consciousness being different from intelligence, here is for example what David Chalmers says, and I think most philosophers would agree:
- "Importantly, consciousness is not the same as human-level intelligence. In some respects it’s a lower bar. For example, there’s a consensus among researchers that many non-human animals are conscious, like cats or mice or maybe fish. So the issue of whether LLMs can be conscious is not the same as the issue of whether they have human-level intelligence. Evolution got to consciousness before it got to human-level consciousness. It’s not out of the question that AI might as well."[22]
- Similarly, Jonathan Birch, in The Edge of Sentience, said:
- "Proposal 22. Sentience is not intelligence (II). We should be aware of the possibility of a substantial decoupling between intelligence and sentience in the AI domain. Precautions to manage risks of suffering should be driven by markers of sentience, not markers of intelligence. For example, emulations of animal brains could achieve sentience without necessarily displaying impressive intelligence."[23]
- And I saw that you added a lot of references. But many of them, especially the ones about the definitions, don't actually support your claims. If roughly half of the references you added to the article don't support the claims, then it's not an improvement to the article; it obscures the fact that the text is personal opinion. Alenoach (talk) 12:59, 29 September 2024 (UTC)
- @Alenoach: I was summoned here, and so, pertaining to taking a stance on the editing issue, I shall remain neutral. However, I feel free to comment on the wider situation. And that is, "What are we doing here fiddling with the semantics of an AI article, when we have the awareness that we do about AI and rapidly approaching ASI?" Rome is burning. Shouldn't we be doing something about it, like figuring out how to make Wikipedia sentient, so it can save us all? — The Transhumanist 21:32, 18 September 2024 (UTC)
Response to third opinion request:
I am responding to a third opinion request for this page. I have made no previous edits on Artificial general intelligence and have no known association with the editors involved in this discussion. The third opinion process is informal and I have no special powers or authority apart from being a fresh pair of eyes.
@Alenoach: @Orexin: From what I read and interpreted, this seems to stem from a misunderstanding of Wikipedia policy. Wikipedia is essentially a compilation of information based on reliable sources. We can't pick a certain timeframe to accept sources from; we have to use the most recent, accurate sources, whether or not we personally feel they're correct. If the meaning of something has changed in recent sources, Wikipedia should be updated to reflect that. The definition should be written with respect to that and to the policy on due weight for sources. "Matches or surpasses human intelligence" seems to be the most widely referenced definition. — BerryForPerpetuity (talk) 18:21, 3 October 2024 (UTC)
First sentence definition
Regarding recent reversions, I am not convinced by Orexin's quotes that AGI is authoritatively defined as human-level, but I am not aware of an authoritative alternative definition. This may simply not be a precisely defined topic, and less ambiguous terms such as "human-level" or "superintelligent" are available when clarity is needed. Can we agree that the key feature of AGI, compared to other AI, is general capability across a wide range of tasks? WeyerStudentOfAgrippa (talk) 12:41, 30 September 2024 (UTC)
- I personally agree, although I would argue for reverting this. Alenoach (talk) 00:16, 1 October 2024 (UTC)
- Regarding "AI is the general capability across a wide range of tasks":-intelligence exists as a physical brain state. Sub-human and artificial general intelligence (AGI) are distinct physical configurations of matter. Artificial superintelligence (ASI) would be a vastly more complex physical system, capable of manipulating information (physical states) to solve advanced mathematical and scientific problems with ease. The "generality" of intelligence scales with the physical complexity and information processing capabilities of the system. Hence generality is reference always to level of intelligence. Orexin (talk) 15:26, 4 October 2024 (UTC)
Reversion of Orexin's edits
I will be reverting much of the content added by Orexin, who has been indefinitely banned for edits to other articles (unconstructive edits, edit warring, personal accusations, and flooding articles with AI-generated content, including many fake or vague citations; see Orexin's talk page). Alenoach (talk) 18:59, 21 October 2024 (UTC)
- A valid temporary response, but I would caution against relying on that kind of justification rather than evaluating the content itself. In this case, I'm not familiar enough to say much, but anyone can restore real sources or add them to a Template:Refideas on the talk page. WeyerStudentOfAgrippa (talk) 12:09, 22 October 2024 (UTC)
- Yes, if the content is good enough, it's better to keep it or adjust it. The reasons why I removed the added content on consciousness are: 1. the claim that materialists think intelligence automatically entails consciousness; 2. a confusion between functionalism and mind-brain identity theory; 3. the part on Ivan Pavlov, which was mostly off-topic; 4. references that are very old and hard to verify; 5. an excessive focus on materialism vs. idealism, which does not appear so prominent in the scientific literature and oversimplifies the debate; usually, the terms "physicalism" and "dualism" are used instead.
- There are, however, some chunks of text that seem accurate and well-written, and could be reused:
- "proponents of the mind-brain identity theory, argue that consciousness is identical to brain processes. According to this view, consciousness emerges from complex neurobiological activities"
- "They (idealists) suggest that even if an AGI could mimic human intelligence, it might not possess true consciousness unless it shares in the non-physical essence that constitutes conscious experience."
- If needed, I could add a paragraph that briefly explains the main theories: functionalism, mind-brain identity theory, and dualism. It's not so easy, though, to make general statements about what these theories conclude regarding AGI's potential for sentience (the "hard problem of consciousness"), because each has many subtheories. Alenoach (talk) 18:01, 22 October 2024 (UTC)