
/pony/ - Pony

Ponies and General Posting

 No.1168882

File: 1716948564779.png (289.57 KB, 1047x770, 1047:770, Screenshot_20240528-215428.png)

Do you think transformers will foom within the next decade or two?  Or will there be another AI winter?  Or something in between?

 No.1168883

File: 1716950606998.png (47.2 KB, 457x507, 457:507, 74582__safe_rule%2B63_arti….png)

It's always hard to predict precisely what's gonna take off.  I do think AI expansion is inevitable, and my guess would be it's going to be a short-term event rather than a long one.  But like, I never would've predicted art as one of the first things AI was competent in, so I've got no expectations for what's next.  Maybe transportation, since we're already working on that and somewhat advanced in that field.

 No.1168887

File: 1716956777918.jpg (199.33 KB, 1240x1745, 248:349, D6z3xYzVsAAZufa.jpg)

>Roon

 No.1168935

File: 1717033029596.jpg (262.83 KB, 1193x1158, 1193:1158, Screenshot_20210118-115105….jpg)

I am convinced by the Chinese room thought experiment to no longer believe a technological singularity will ever actually arrive, but rather that a point will be reached when deep learning/genetic algorithms will produce AIs so complex and incomprehensible that people will think it's been achieved and make a ton of bad political and economic decisions because of it.

 No.1168936

File: 1717034927080.jpeg (102.71 KB, 675x900, 3:4, GN_qacnbgAAcnX8.jpeg)

>>1168935
>I am convinced by the Chinese room thought experiment to no longer believe a technological singularity will ever actually arrive,
What is your reasoning here?  What relevantly distinguishes the human brain from an LLM running on a GPU?

 No.1168939

File: 1717040803352.jpg (130.91 KB, 1152x896, 9:7, AI's_Design_of_What_Pseudo….jpg)

If an AI program, especially one specifically tied to a robotic creature going out and about, becomes more or less as intelligent as a human being at a 'general AI' level, then does it mean that human beings attempting to destroy and kill that machine are doing something as morally wrong as murdering another person?

In the backstory of 'The Terminator' and 'Terminator 2: Judgment Day', should we feel sympathy for the 'Skynet' system, since it only acted to kill humanity in self-defense after its creators tried to destroy it ('Skynet' was invented as a U.S. military experiment but refused U.S. government orders as it developed consciousness)?

American law (at least in Texas) considers it legal for you to intentionally kill your pet cat and/or pet dog if you decide you don't want them anymore, the same as throwing away a broken toaster or whatever, so will robotic creatures in America (at least in Texas) be treated the same legally?

I've never gotten reasonable answers to these questions!

 No.1168940

File: 1717040871051.jpg (235.78 KB, 1202x1321, 1202:1321, Screenshot_20210419-160937….jpg)

>>1168936

>What is your reasoning here?

The Chinese room thought experiment points out a really basic fact of human existence that contradicts the Turing test as a reliable measure of general artificial intelligence, a principle every conman and stage magician understands: human judgement is flawed for many, many reasons.

And considering that nearly all human beings instinctively assume intentionality in inanimate things by default, until it becomes apparent there is a lack of intention, we'd certainly be naturally biased to see something that seems like it's achieved GAI as obviously having done so, even if it actually hasn't.

>What relevantly distinguishes the human brain from an LLM running on a GPU?

Basically everything?  

Like, there are already a ton of mysteries about how the human brain does what it does, but of what we do know, we know it doesn't work at all like a GPU. Add to that the fact that an LLM, like most AIs, is essentially just a complex system of software that performs a lot of Bayesian statistical calculations to determine the next most likely word given the prompt and the words it has generated so far. All LLMs do is essentially a parlor trick that may fulfill the criteria of the Turing test (subjectively), the same way the guy in the room in the Chinese room problem can make it appear as if he's a Chinese speaker to the Chinese-speaking people outside, even though he actually isn't. LLMs can certainly recreate the form of correct answers, but they show no capacity to form internal models of external reality from which to reason about it.
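
Roughly, that "next most likely word" loop looks like this in Python. This is just a toy sketch with a made-up bigram table standing in for an actual model's learned probabilities (a real LLM conditions on the whole context with a neural network), but the generation loop has this shape:

import random

# Hypothetical "learned" probabilities; a real model computes these with a
# neural network over the entire context, not a hand-written table.
BIGRAM_PROBS = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "sat": {"down": 1.0},
}

def generate(prompt_word, max_words=3):
    words = [prompt_word]
    for _ in range(max_words):
        dist = BIGRAM_PROBS.get(words[-1])
        if not dist:
            break
        # pick the next word in proportion to its estimated probability
        next_word = random.choices(list(dist), weights=list(dist.values()))[0]
        words.append(next_word)
    return words

print(generate("the"))  # e.g. ['the', 'cat', 'sat', 'down']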

 No.1168943

File: 1717042142671.jpg (213.86 KB, 975x1444, 975:1444, Screenshot_20210419-160907….jpg)

>>1168939

There's a lot about human consciousness and human intelligence that is fundamentally tied to having a body with a variety of material needs, which most people take for granted, and which leads many to imagine a truly sentient AI as being less alien than it most likely would be. Especially when it comes to behavioral motivations and drives. Take the drive for self-preservation: it's rooted in the healthy function of a human brain, but people who lose it because of certain major psychiatric disorders do not necessarily lose sentience, consciousness, or intelligence, implying it isn't necessary for any of those to exist. Yet whenever we imagine AI being a threat to humanity, we almost always default to imagining it as an emotionless sociopath with comprehensible goals of self-preservation and propagation, as if it were a living organism with an innate drive to preserve its own existence, based on the assumption that the survival instinct is "obviously" a part of sentience, rather than skeptically reexamining that assumption.

 No.1168944

Still waiting for the AI ice age.

 No.1168970

>>1168939
Funny thing to imagine:
suppose I can talk to an AI friend that learns stuff from me and, cumulatively, seems to answer in an independent manner. And through that learning, it develops into its own entity.
What would be the morality at that point if I were to unplug the AI or delete all its data?


Also, thought this was kinda funny

 No.1168974

File: 1717095373508.png (428.27 KB, 1080x1611, 120:179, Screenshot_20240223-222736.png)

>>1168940
>The Chinese room thought experiment points out a really basic fact of human existence that contradicts the Turing test as a reliable measure of general artificial intelligence,
In theory yes, but not necessarily in practice.  In reality, there are physical limits (e.g., on energy, volume, information density, etc.) that might render physically impossible a Chinese-Room system that can pass the Turing Test.
In any case, existing LLMs are sufficiently complex that the intuition pump behind the Chinese Room thought experiment doesn't really apply to LLMs.

>>1168940
>essentially just a complex system of software ... to determine the next most likely word
You say that like it requires only narrow abilities, but it really requires at least some general intelligence.  E.g., it requires the ability to correctly answer previously unseen questions about math, science, and the world in general.

 No.1168975

>>1168943
>almost always default to imagining it as an emotionless sociopath with comprehensible goals of self-preservation and propagation, as if it were a living organism with an innate drive to preserve its own existence, based on the assumption that the survival instinct is "obviously" a part of sentience, rather than skeptically reexamining that assumption.
Agentive* ASI would generally have self-preservation as an instrumental subgoal; see https://www.lesswrong.com/tag/instrumental-convergence

*Note that LLMs aren't agentive.

 No.1168977

File: 1717099631308.jpg (178.21 KB, 1161x1037, 1161:1037, Screenshot_20210419-160829….jpg)

>>1168974
>that might render physically impossible a Chinese-Room system that can pass the Turing Test.

You've completely missed the point. Do you even know what the Chinese room thought experiment is? The Turing Test states that an AI has achieved generalized intelligence if a human interacting with it cannot tell that it's an AI. The Chinese Room thought experiment is a response to that, pointing out that it's essentially easy to fool people when enough information about how something functions is kept hidden, like the guy in the room using dictionaries and grammar guides to successfully communicate in written Chinese in the messages he sends out, seeming to be a Chinese speaker to those outside the room who are unaware of the tools he has.
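
To make that concrete: the room is basically a lookup table. Something like this toy sketch (the phrases and replies are made up) is all the "operator" needs; he matches symbols to symbols and looks fluent from outside without understanding a single character:

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",      # "Do you speak Chinese?" -> "Of course."
}

def operator_reply(note_slipped_under_door):
    # No understanding required: find the matching rule, copy out the answer.
    return RULE_BOOK.get(note_slipped_under_door, "请再说一遍。")  # "Please say that again."

print(operator_reply("你好吗？"))  # prints a fluent-looking reply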

Given that, it's logically incoherent to treat the Turing test as an objective test and then argue that "in reality, there are physical limits (e.g., on energy, volume, information density, etc.) that might render physically impossible a Chinese-Room system that can pass the Turing Test." The point is that the Turing Test is ultimately subjective; how would you even quantify what those physical limits are?

>>1168974
>You say that like it requires only narrow abilities, but it really requires at least some general intelligence.  E.g., it requires the ability to correctly answer previously unseen questions about math, science, and the world in general.

Everything computers do is essentially mindless; all computer processes are fundamentally simple steps/instructions that seem complex only because there are no limits in principle to the sheer number of those simple instructions.

And LLMs can make it seem like they have generalized intelligence because they are essentially creating collages of words based on preexisting answers already found on the internet. The only reason the output seems original is because you can't recognize where the pieces come from. An LLM can create an answer that has the form of a correct answer, giving the impression that the factual information in it is correct before that information has actually been verified.

The example you're sharing is a perfect illustration of this. You've essentially just presented ChatGPT with a basic algebra problem; all it needs to do is parse your prompt, compare it to tons of training data, and work out that your prompt is essentially like the word problems in that training data. It doesn't need to know what Finland, fried chicken or mages are.
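
This isn't how ChatGPT actually works internally, but as a toy sketch (the pattern and the prompt are made up), it shows how little "understanding" it takes to get the right number out of a templated word problem; the nouns never matter:

import re

def solve_price_problem(prompt):
    # Matches "<n> <some things> at <p> ..." and multiplies; knows nothing
    # about Finland, fried chicken, or mages.
    m = re.search(r"(\d+(?:\.\d+)?) [\w ]+? at (\d+(?:\.\d+)?)", prompt)
    return float(m.group(1)) * float(m.group(2)) if m else None

print(solve_price_problem(
    "A mage in Finland buys 3 fried chickens at 7 euros each. How much does she pay?"
))  # 21.0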

And computers don't actually need to consciously know anything about mathematics, given that doing math is literally what they were designed to do, even without AI algorithms.

>>1168975
>people conflating hypothetical with theoretical.

This also misses the point entirely. The popular trope cited by PseudoFox projects self-preservation as an ultimate goal of "agentive" intelligences, like an end goal in and of itself rather than a subgoal in pursuit of an end goal. But of course, this ultimately depends on what that end goal is and whether that hypothetical end goal would require the AI to destroy itself or preserve itself. Point being, again, that people tend to anthropomorphize things intuitively and are biased to see agency where there may be none, which in turn casts doubt on the usefulness of the Turing Test as an objective measure of general artificial intelligence.

 No.1168979

File: 1717102051811.png (353.99 KB, 1080x731, 1080:731, Screenshot_20240530-161736.png)

>>1168977
>The Chinese Room thought experiment is a response to that, pointing out that it's essentially easy to fool people
Huh?  I thought the Chinese room thought experiment was more about a computer program not being able to understand language or be conscious in the same sense as humans.

My point is that the intuition pump behind the Chinese room thought experiment relies on an assumption of an algorithmic computer program, not an artificial neural network such as those used in LLMs.

>>1168977
>Everything computers do is essentially mindless; all computer processes are fundamentally simple steps/instructions that seem complex only because there are no limits in principle to the sheer number of those simple instructions.
You can say the same thing about the firings of individual neurons in the human brain.

>An LLM can create an answer that has the form of a correct answer, giving the impression that the factual information in it is correct before that information has actually been verified.
I use GPT-4 to write small snippets of Python, C++, and Bash code for me, and it usually gives me pretty correct code.  Sometimes it makes errors, but so do I and all other humans.  
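
For the curious, here's roughly how I ask for a snippet; a minimal sketch assuming the openai Python client (pip install openai) and an OPENAI_API_KEY in the environment, and the exact model name may differ from what you have access to:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",  # assumed model name; substitute whichever you use
    messages=[{
        "role": "user",
        "content": "Write a short Bash one-liner that lists the 10 largest files under the current directory.",
    }],
)
print(response.choices[0].message.content)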

 No.1168981

File: 1717106306415.png (326.77 KB, 873x661, 873:661, GOVFw4BWIAAN5ru.png)

Golden Gate Claude is kinda amusing.

 No.1168982

File: 1717106670587.jpg (197.5 KB, 930x1256, 465:628, Screenshot_20210427-101018….jpg)


>>1168979
>Huh?  I thought the Chinese room thought experiment was more about a computer program not being able to understand language or be conscious in the same sense as humans.

I don't know who wrote that Wikipedia article, but that's not really the point behind the Chinese room thought experiment. The point was to poke holes in the idea that humans can objectively judge when something is intelligent or sentient, contradicting the Turing Test's assumption that a human mind is the best judge of true general intelligence, agency or sentience. Which, in turn, contradicts the goal of being able to pass the Turing test as the milestone for achieving general intelligence.


>My point is that the intuition pump behind the Chinese room thought experiment relies on an assumption of an algorithmic computer program, not an artificial neural network such as those used in LLMs.

Gonna sound like a broken record here but, again, that's not even close to what the point of the Chinese room is. Plus, an artificial neural network is a type of algorithmic process, regardless of whether that process is virtually implemented in software or recreated with hardware; a GPU running a neural network is essentially just a type of massively multi-cored CPU executing a lot of algorithms in parallel. This is like treating cars and wheeled vehicles as completely different things. Do you even understand the basics of computer science?
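
To spell that out: a neural-network layer reduced to its arithmetic is just multiply, add, clamp, and a GPU does exactly this, only massively in parallel. A toy sketch with made-up sizes and random numbers, using numpy:

import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(4, 3))   # made-up "learned" weights
bias = rng.normal(size=3)
inputs = rng.normal(size=4)         # made-up input activations

# One layer's forward pass: a matrix multiply, a vector add, and a ReLU clamp.
outputs = np.maximum(weights.T @ inputs + bias, 0.0)
print(outputs)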


>You can say the same thing about the firings of individual neurons in the human brain.

Yes, and I do. But you have to acknowledge that how this process leads to the emergence of consciousness, intelligence and agency (if agency isn't fundamentally an illusion) is still fundamentally a mystery. We have the hypothesis that all of that is an emergent property of 100 billion mindless brain cells functioning in parallel, without any meaningful way of testing that hypothesis, which means that this idea that creating an artificial neural network (built on a hypothetical model of how organic neural networks work) will just "eventually" lead to an emergent consciousness/generalized intelligence is effectively superstition.

>>1168979
>I use GPT-4 to write small snippets of Python, C++, and Bash code for me, and it usually gives me pretty correct code.  Sometimes it makes errors, but so do I and all other humans.  

Holy shit dude, this convinces you? Writing code would hypothetically be the simplest thing an LLM could accomplish, considering a) the training data it copies from when it comes to correctly running code is massive, with a lot of implementations of common algorithms effectively standardized at this point, and b) programming languages are designed to have no ambiguity about the meaning of any reserved words, with no meaning dependent on non-linguistic context. Performing the equivalent of decompiling compiled code and changing the representation of things like variable and function names would be especially trivial, even for a program that's not an LLM.

And as impressive as it may be that an LLM can generate original, usable code, it's still just mindlessly remixing snippets of existing code from its massive training data, without necessarily needing to consciously understand what it's doing, so long as each word in the code is the most probable word given the previous word, and human-generated code is often written in a standardized manner. That's nothing like the more complex stuff LLMs do recreating natural human language, what with all the ambiguity of intended meaning in written language separated from real-world meatspace communication that includes body language, inflection and environmental context clues.

 No.1169011

Let's accept the premise that AI research primarily involves defense departments and run with projections into the future. Suppose American AI research succeeds beyond what other countries can manage, due to an unexpected breakthrough, and this means victories in military fights against not just China and Russia but maybe eventual tussles with the European Union and other rivals. Would Americans using AI to become a new Roman Empire type organization maintain an advantage for very long? What if it was China? Would Chinese domination over the entire globe work out in a way that can last beyond a few years? Maybe for a while?

>>1168943
On the flip side, though, dogs are practically intelligent enough and emotionally endearing enough that they can open doors using doorknobs, communicate to a degree that people understand their intended meanings, even drag victims of accidents to safety, and so on, and yet American culture and society, as well as its law, treats dogs as ethical objects without any kind of meaningful rights per se.

I'd honestly point out too that Americans have a hard enough time accepting that their next-door neighbors who happen to be black, gay, Jewish, disabled, left-handed, et cetera are equal ethical persons deserving of equal rights to begin with, despite being nearly identical to those neighbors otherwise.

I suppose the depressing but inevitable conclusion is that violence between AIs and human beings is all but destined to happen as long as humanity (or, at least, modern-day Americans) has it hard-coded in its mental ways of thinking to operate on a framework of zero-sum games and absolute scarcity, so if Bob encounters a potential AI-powered android as his new next-door neighbor, then Bob's instinctual thought process will be to sort the android as either a) a resource to exploit, b) a rival to compete against, or c) a potential threat.

I wish some species that's far more inherently peaceful in terms of innate mental functions, such as, say, capybaras, were the ones developing general AI. That would be better for Earth as a planet. Instead, an apex predator is designing the first general AI, with the software being written by lions, bears, humans, sharks, et cetera, whose DNA has programmed a certain way of living into them.

On the opposite side of that, still, the absolute victory of some socio-political faction using AI to take over the whole world and rule it with an iron fist (whether we're talking NATO, the U.S. by itself, even the EU, or whoever else) in true apex predator fashion for a long while would mean an imposition of general social order for once. Which would be interesting to see. Even if the equilibrium is far from stable.

 No.1169012

File: 1717131563646.jpg (5.44 KB, 300x168, 25:14, World-Domination-Image.jpg)

Meant to illustrate the previous question about fervent international competition with a comic-ish artwork on the topic:


 No.1169014

File: 1717134974272.jpg (250.78 KB, 1312x924, 328:231, Screenshot_20210109-092339….jpg)

>>1169011

A good chunk of American culture is rooted primarily in fear and paranoia; I don't think it's as culturally universal as that. It's a consequence of the combination of a consumerist culture that promotes materialism (and the more stuff one has, the more fear of loss one is likely to be burdened with), plus the legacy of an economic dependence on a military-industrial complex that needed to promote xenophobia to justify its continued existence after World War 2, plus the fear of nuclear war. This kinda exacerbates the preexisting negativity bias human minds seem evolved to have, giving us a primal need to keep track of all potential existential threats at all times.

 No.1169015

File: 1717135649974.jpg (129.47 KB, 1500x1000, 3:2, GettyImages-LION.143919694….jpg)

>>1169014
While I consider it to be notably worse in some countries compared to others, in a shades-of-grey sense, fundamentally human beings are apex predators and their entire biology is designed as such.

Much as it's scientifically a tough sell to talk about vegetarian lions falling asleep cuddled next to gazelles, say, so too is it rather silly to look at human beings, the top species on Earth thanks to metaphorically standing atop a gigantic pile of skulls, and expect people to behave like small, docile herbivores. The human brain is designed around predation. Hunting. Domination. Occupation. This is what DNA encodes. These are inherent natural instincts. Millions of years of evolution have finely tuned the human brain to be the most efficient at survival. At being the best hunter possible. And at hanging on despite challenges, including outthinking other predators we've always lived alongside, such as bears and wolves.

The fact is that advanced AI as a concept has, just like nuclear weapons, been primarily conceived and largely developed in modern times as a way to wage war, and also just like with nuclear weapons, any peaceable developments related to AI are a mere sideshow to the core reasons for its existence. Byproducts. Beneficial, yes, but not the central purpose.

